OpenStack:Nova
Command
- nova list : prints the list of active servers (instances).
- nova service-list : prints the list of Nova services.
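A quick usage sketch (assuming an RDO/packstack setup where a keystonerc_admin credentials file exists; adjust to your environment):
$ source keystonerc_admin   # load admin credentials
$ nova list                 # columns: ID, Name, Status, Task State, Power State, Networks
$ nova service-list         # shows nova-scheduler, nova-conductor, nova-compute, ... and their up/down state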
File
- /etc/nova/nova.conf : Nova configuration file.
- /var/log/nova/nova-compute.log : Nova compute log file.
Configuration
Nova is configured through the /etc/nova/nova.conf file.
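A minimal sketch of a few commonly used [DEFAULT] options (the values are placeholders; the file generated by your installer will contain many more):
[DEFAULT]
# management IP of this node (placeholder)
my_ip = 172.17.1.3
# where Nova writes its logs (matches /var/log/nova/nova-compute.log above)
log_dir = /var/log/nova
# instance files, caches, and other state
state_path = /var/lib/nova
# turn on debug logging when troubleshooting
debug = False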
Nova API Metadata Service
- [Recommended] Play with OpenStack instance metadata
- Introduction of Metadata service in Openstack
- Instance can not access Openstack metadata service
- What is this IP address: 169.254.169.254?
- OpenStack: things to check in quantum when a VM's network does not work (includes metadata-related content)
- Amazon EC2, private, public ip, hostname, or etc
- metadata provided admin_pass not working with cloudbase provided ws2012 image #11
- unable to ping metadata server 169.254.169.254
- metadata service not reachable from instance in neutron single flat provider network
- How to use meta-data service for VM with provider network
- [Recommended] OpenStack cloud-init cannot contact or ping 169.254.169.254 to establish meta-data connection – fix
This service accepts metadata requests from instances. The nova-api-metadata service is generally used with nova-network installations running in multi-host mode. For details, see the Metadata service section of the OpenStack Cloud Administrator Guide.
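From inside an instance, the metadata service can be exercised directly. The link-local address 169.254.169.254 is fixed, and both paths below are standard endpoints (EC2-compatible and OpenStack-native):
$ curl http://169.254.169.254/latest/meta-data/                 # EC2-style listing: instance-id, hostname, ...
$ curl http://169.254.169.254/openstack/latest/meta_data.json   # OpenStack-native metadata as JSON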
First check the official OpenStack documentation about the metadata service. Ideally, if you run an OpenStack production environment, you will opt for a multi-compute-node setup, which requires the Nova metadata service on each nova-compute node (for performance reasons). You can assign the metadata server in one of these ways:
- Use the metadata_host option in nova.conf and specify the IP address of the node running nova-api.
- Run the nova-api service on each nova-compute node and set the enabled_apis = metadata flag in nova.conf.
- Or, the last option (my favorite), simply run the nova-api-metadata service (the package has the same name) on each nova-compute node. It does not require any modification to nova.conf (the default options are enough).
Flags related to metadata in nova.conf:
metadata_listen_port = 8775
metadata_host = 172.17.1.3
metadata_manager = nova.api.manager.MetadataManager
quota_metadata_items = 128
metadata_listen = 0.0.0.0
enabled_apis = ['ec2', 'osapi_compute', 'osapi_volume', 'metadata']
metadata_port = 8775
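To check from the node itself that the metadata API is actually listening on the configured port, something like the following can be used (172.17.1.3 is the metadata_host value from the flags above):
$ ss -lntp | grep 8775          # the nova-api / nova-api-metadata process should be listening here
$ curl http://172.17.1.3:8775/  # should answer with the supported metadata version strings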
Restart
On an RDO Project installation, the services can be restarted as follows.
$ sudo systemctl | grep nova
$ sudo systemctl restart openstack-nova-api.service
$ sudo systemctl restart openstack-nova-cert.service
$ sudo systemctl restart openstack-nova-compute.service
$ sudo systemctl restart openstack-nova-conductor.service
$ sudo systemctl restart openstack-nova-consoleauth.service
$ sudo systemctl restart openstack-nova-novncproxy.service
$ sudo systemctl restart openstack-nova-scheduler.service
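The same restarts can be issued in one line with shell brace expansion, assuming all of the listed services are installed on this node:
$ sudo systemctl restart openstack-nova-{api,cert,compute,conductor,consoleauth,novncproxy,scheduler}.service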
VGA PCI passthrough
Troubleshooting
KVM: Permission denied
- Bug 950436 - Could not access KVM kernel module: Permission denied
- Re: libvirt-users - virt-install: failed to initialize KVM: Permission denied
- Error: No valid host was found. There are not enough hosts available - closed
When creating an instance from Horizon, an error such as "No valid host was found. There are not enough hosts available." may be shown. Checking the log file may reveal a traceback like the one below.
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 96, in cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 117, in tmpcb
callback(*args, **kwargs)
File "/usr/share/virt-manager/virtManager/domain.py", line 1160, in startup
self._backend.create()
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 692, in create
if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: internal error Process exited while reading console log output: char device redirected to /dev/pts/1 (label charserial0)
Could not access KVM kernel module: Permission denied
failed to initialize KVM: Permission denied
In this case, the SELinux context related to libvirt may be blocking access to KVM; check the relevant SELinux settings. If that is too much trouble, just disable SELinux!
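A quick troubleshooting sketch on the compute node (the exact output will differ per system):
$ ls -l /dev/kvm                    # the qemu/kvm process must be able to open this device
$ getenforce                        # Enforcing / Permissive / Disabled
$ sudo ausearch -m avc -ts recent   # look for SELinux denials involving qemu-kvm
$ sudo setenforce 0                 # temporarily switch to Permissive ("just disable it")
To make the change permanent, set SELINUX=permissive (or disabled) in /etc/selinux/config and reboot.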