Create the nova database and grant the nova user access to it:
mysql -uroot -p
create database nova;
grant all on nova.* to 'nova'@'%' identified by 'nova';
quit;
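The interactive session above can also be scripted. A minimal sketch: the same SQL is composed in a here-document and only printed here; in real use you would pipe it into `mysql -uroot -p` instead (host, user, and password as above).

```shell
# Compose the SQL from the interactive session in a here-document.
# Printed here for inspection; pipe into "mysql -uroot -p" to run it.
sql=$(cat << 'EOF'
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
EOF
)
printf '%s\n' "$sql"
```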
Edit the authtoken section in /etc/nova/api-paste.ini:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 172.16.0.51
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = password
signing_dir = /tmp/keystone-signing-nova
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0
Edit /etc/nova/nova.conf so it looks similar to the following:

[DEFAULT]
# LOGS/STATE
debug = False
verbose = True
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lock/nova
rootwrap_config = /etc/nova/rootwrap.conf
dhcpbridge = /usr/bin/nova-dhcpbridge

# SCHEDULER
compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler

## VOLUMES
volume_api_class = nova.volume.cinder.API

# DATABASE
sql_connection = mysql://nova:nova@172.16.0.51/nova

# COMPUTE
libvirt_type = kvm
compute_driver = libvirt.LibvirtDriver
instance_name_template = instance-%08x
api_paste_config = /etc/nova/api-paste.ini

# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host = True

# APIS
osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host = 172.16.0.51
s3_host = 172.16.0.51
metadata_host = 172.16.0.51
metadata_listen = 0.0.0.0

# RABBITMQ
rabbit_host = 172.16.0.51
rabbit_password = guest

# GLANCE
image_service = nova.image.glance.GlanceImageService
glance_api_servers = 172.16.0.51:9292

# NETWORK
network_api_class = nova.network.quantumv2.api.API
quantum_url = http://172.16.0.51:9696
quantum_auth_strategy = keystone
quantum_admin_tenant_name = service
quantum_admin_username = quantum
quantum_admin_password = password
quantum_admin_auth_url = http://172.16.0.51:35357/v2.0
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver

# NOVNC CONSOLE
novncproxy_base_url = http://192.168.8.51:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address = 192.168.8.51
vncserver_listen = 0.0.0.0

# AUTHENTICATION
auth_strategy = keystone

[keystone_authtoken]
auth_host = 172.16.0.51
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = password
signing_dir = /tmp/keystone-signing-nova
Sync the database, then start the nova services:

nova-manage db sync
cd /etc/init.d/; for i in $( ls nova-* ); do sudo /etc/init.d/$i restart; done
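The restart loop above can be rehearsed safely before touching real services. A sketch against a temporary directory of fake init scripts (the directory and the service names are stand-ins); it also uses a glob instead of parsing `ls` output, which is the more robust shell idiom:

```shell
# Simulate /etc/init.d with a throwaway directory of fake nova services.
tmpdir=$(mktemp -d)
touch "$tmpdir"/nova-api "$tmpdir"/nova-scheduler "$tmpdir"/nova-conductor

# Glob instead of $(ls nova-*): safe with unusual filenames, same effect.
restarted=""
for i in "$tmpdir"/nova-*; do
    # Real version: sudo /etc/init.d/"${i##*/}" restart
    restarted="$restarted ${i##*/}"
done
echo "would restart:$restarted"

rm -rf "$tmpdir"
```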
Check the nova services (a smiley face in the State column means the service is up):

nova-manage service list
Binary            Host     Zone      Status    State   Updated_At
nova-consoleauth  control  internal  enabled   :-)     2013-03-31 09:55:43
nova-cert         control  internal  enabled   :-)     2013-03-31 09:55:42
nova-scheduler    control  internal  enabled   :-)     2013-03-31 09:55:41
nova-conductor    control  internal  enabled   :-)     2013-03-31 09:55:42
Horizon

Install horizon:

apt-get install openstack-dashboard memcached

vim /etc/openstack-dashboard/local_settings.py

# Enable the Ubuntu theme if it is present.
#try:
#    from ubuntu_theme import *
#except ImportError:
#    pass
Restart apache2 and memcached:

/etc/init.d/apache2 restart
/etc/init.d/memcached restart

You can now log in to the dashboard through a browser as admin:password.
Network Node

Network configuration:

# cat /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 172.16.0.52
    netmask 255.255.0.0

auto eth1
iface eth1 inet static
    address 10.10.10.52
    netmask 255.255.255.0

auto eth2
iface eth2 inet manual

# /etc/init.d/networking restart
# ifconfig eth2 192.168.8.52/24 up
# route add default gw 192.168.8.1 dev eth2
# echo 'nameserver 8.8.8.8' > /etc/resolv.conf
Add the repository

Add the Grizzly repository and upgrade the system:

cat > /etc/apt/sources.list.d/grizzly.list << _GEEK_
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/grizzly main
_GEEK_

apt-get update
apt-get upgrade
apt-get install ubuntu-cloud-keyring
Set up ntp and enable IP forwarding:

# apt-get install ntp
# sed -i 's/server ntp.ubuntu.com/server 172.16.0.51/g' /etc/ntp.conf
# service ntp restart

# vim /etc/sysctl.conf
net.ipv4.ip_forward=1
# sysctl -p
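The `sed` edit of ntp.conf can be rehearsed on a throwaway copy before touching the real file. A minimal sketch, assuming the stock Ubuntu `server ntp.ubuntu.com` line:

```shell
# Try the substitution on a temporary file instead of /etc/ntp.conf.
tmpconf=$(mktemp)
echo 'server ntp.ubuntu.com' > "$tmpconf"
sed -i 's/server ntp.ubuntu.com/server 172.16.0.51/g' "$tmpconf"
grep '^server' "$tmpconf"    # prints: server 172.16.0.51
rm -f "$tmpconf"
```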
OpenVSwitch

Install Open vSwitch:

apt-get install openvswitch-switch openvswitch-brcompat
Configure ovs-brcompatd to start:

sed -i 's/# BRCOMPAT=no/BRCOMPAT=yes/g' /etc/default/openvswitch-switch
Start openvswitch-switch:

/etc/init.d/openvswitch-switch restart
 * ovs-brcompatd is not running      # brcompatd did not start; try again
 * ovs-vswitchd is not running
 * ovsdb-server is not running
 * Inserting openvswitch module
 * /etc/openvswitch/conf.db does not exist
 * Creating empty database /etc/openvswitch/conf.db
 * Starting ovsdb-server
 * Configuring Open vSwitch system IDs
 * Starting ovs-vswitchd
 * Enabling gre with iptables
Restart again until ovs-brcompatd, ovs-vswitchd, and ovsdb-server are all running:

# /etc/init.d/openvswitch-switch restart
# lsmod | grep brcompat
brcompat               13512  0
openvswitch            84038  7 brcompat
If brcompat still will not start, run:

/etc/init.d/openvswitch-switch force-reload-kmod
Create the bridges:

ovs-vsctl add-br br-int          # br-int is used for VM integration
ovs-vsctl add-br br-ex           # br-ex is used to reach VMs from the Internet
ovs-vsctl add-port br-ex eth2    # bridge br-ex onto eth2
After the steps above the eth2 interface no longer works; edit the interface configuration file:

# ifconfig eth2 0
# ifconfig br-ex 192.168.8.52/24
# route add default gw 192.168.8.1 dev br-ex
# echo 'nameserver 8.8.8.8' > /etc/resolv.conf
# vim /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 172.16.0.52
    netmask 255.255.0.0

auto eth1
iface eth1 inet static
    address 10.10.10.52
    netmask 255.255.255.0

auto eth2
iface eth2 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    down ifconfig $IFACE down

auto br-ex
iface br-ex inet static
    address 192.168.8.52
    netmask 255.255.255.0
    gateway 192.168.8.1
    dns-nameservers 8.8.8.8
Restarting the network may produce:

/etc/init.d/networking restart
RTNETLINK answers: File exists
Failed to bring up br-ex.

br-ex may end up with an IP address but no gateway or DNS; configure those by hand, or reboot the machine. After a reboot everything works.
Documentation update: the network node's eth2 interface is not brought up after a system reboot, so add it to rc.local:
echo 'ifconfig eth2 up' >> /etc/rc.local
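A plain `>>` append adds a duplicate line every time it is run. A sketch of an idempotent variant (my addition, not part of the original guide), exercised here against a temporary file; substitute /etc/rc.local for real use:

```shell
rcfile=$(mktemp)    # stand-in for /etc/rc.local

# Run the append twice; the grep -qxF guard (exact whole-line match)
# keeps the line from being added more than once.
for run in 1 2; do
    grep -qxF 'ifconfig eth2 up' "$rcfile" || echo 'ifconfig eth2 up' >> "$rcfile"
done

grep -cxF 'ifconfig eth2 up' "$rcfile"    # prints: 1
rm -f "$rcfile"
```

Note that the stock Ubuntu /etc/rc.local ends with `exit 0`; a line appended after it never runs, so in practice the command must be placed before that `exit 0`.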
View the bridged networks:

ovs-vsctl list-br
ovs-vsctl show