15. ovirt-shell
Like many virtualization platforms, oVirt has a shell of its own, with some features peculiar to it.
Connecting to ovirt-shell:
yum install ovirt-engine-cli
ovirt-shell -l https://192.168.1.190 -u admin@internal -I
Once connected, press Tab twice for auto-completion. As you can see, in oVirt a vm, host, or cluster can be regarded as an "object", and its operations such as "start" and "stop" as methods.
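For instance (the VM name myvm is a hypothetical placeholder; list, show, and action are the standard ovirt-shell verbs): list all vm objects, inspect one, then invoke its start method:

```
list vms
show vm myvm
action vm myvm start
```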
At last we reach this chapter. I hope its readers will not blindly follow the crowd; whatever suits your own needs is best. ovirt-shell is, to a certain extent, the right fit for particular scenarios.
============================================================================
>>> connected to oVirt manager 3.4.0.0 <<<
============================================================================

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

[oVirt shell (connected)]#
See vdsm-hooks.
Disk expansion: LVM expansion; plain expansion.
ShellInABox.
External providers: Foreman, OpenStack Network, OpenStack Image.
Construct a directory tree of the following form:
.
├── 37e0e64b-5445-4bc3-8675-ceff4637e8e3/
│   ├── dom_md/
│   │   ├── ids
│   │   ├── inbox
│   │   ├── leases
│   │   ├── metadata
│   │   └── outbox
│   ├── images/
│   └── master/
│       ├── tasks/
│       └── vms/
└── __DIRECT_IO_TEST__
Create the _leases_ file:
echo "303000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
0000" | xxd -r -p > leases
Add the following content to the _metadata_ file:
CLASS=Backup
DESCRIPTION=Export
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=0
POOL_UUID=
REMOTE_PATH=192.168.1.108:/home/nfs_exports
ROLE=Regular
SDUUID=37e0e64b-5445-4bc3-8675-ceff4637e8e3
TYPE=NFS
VERSION=0
Use the following script (adapted from packaging/setup/plugins/ovirt-engine-setup/config/iso_domain.py in the engine source) to compute the SHA checksum, and write the result into the _SHA_CKSUM field of _metadata_:
import hashlib
from optparse import OptionParser

if __name__ == "__main__":
    parser = OptionParser()
    (options, args) = parser.parse_args()
    if len(args) != 1:
        parser.error("Missing metadata file")
    # Collect KEY=VALUE pairs, skipping comments and any existing checksum
    mds = {}
    with open(args[0], "r") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            try:
                key, value = line.split('=', 1)
            except ValueError:
                continue
            if key == '_SHA_CKSUM':
                continue
            mds[key] = value
    # SHA-1 over the sorted KEY=VALUE lines
    csum = hashlib.sha1()
    for key in sorted(mds):
        csum.update(("%s=%s" % (key, mds[key])).encode())
    print(csum.hexdigest())
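A usage sketch, assuming the script is saved as md_sum.py (a name chosen here for illustration):

```
python md_sum.py metadata                          # prints the digest
echo "_SHA_CKSUM=$(python md_sum.py metadata)" >> metadata
```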
Fix the ownership:
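A minimal sketch, assuming the NFS export path from the _metadata_ above and the 36:36 (vdsm.kvm) ownership noted below:

```
chown -R 36:36 /home/nfs_exports    # uid/gid 36 = vdsm.kvm
```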
The directory can then be imported as an empty export domain.
By this point I assume you already have a data center, several hosts, and perhaps a virtual machine (the engine); all that is still missing is a place to store VM images, and you will have an oVirt platform with basic functionality.
Ok, here we go.
For the plain oVirt setup of section 11 and the HA platform of section 12, you may need to add more nodes to support a better SLA (service level agreement). There are currently three ways to add a node: install from the oVirt node ISO and then register it; convert an existing CentOS or Fedora host directly into a node; or use externally provisioned nodes (Foreman). Here we use the second method.
There are three kinds of storage domains: Data, ISO, and Export. A data domain is mandatory: before any VM can be created, an available data domain is needed to store virtual disks and snapshot files. An ISO domain holds OS images or driver files in ISO and VFD formats. An export domain is used to export or import VMs in OVF format. For the data domain's storage type there are five options (NFS, POSIX-compliant, GlusterFS, iSCSI, Fibre Channel); here I suggest choosing NFS exported from glusterfs.
Note: make sure the storage domain's ownership is 36:36, i.e. vdsm.kvm.
For first-time oVirt users this way of setting things up is the recommended one; too much fiddling would scare off many of the intended readers.
We use the four machines from before, gs1.lofyer.org, gs2.lofyer.org, gs3.lofyer.org, and gs4.lofyer.org: gs1 acts as the management machine and gets ovirt-engine installed, the other three serve as nodes, and storage uses the glusterfs volume created earlier.
Storage can use the existing glusterfs volume, mounted as NFSv3; remember to set the brick ownership to vdsm.kvm, i.e. 36:36.
Because the engine's and nodes' networking relies on the network service rather than NetworkManager, we need to enable the former and disable the latter, applying a network configuration like the following on every server.
NAME=eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.10.101
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
DNS1=192.168.10.1
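A sketch of the matching service toggles (SysV-style, as the text targets EL6):

```
service NetworkManager stop
chkconfig NetworkManager off
chkconfig network on
service network restart
```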
Alternatively, add the repository definitions manually:
[ovirt-stable]
name=Latest oVirt Releases
baseurl=http://ovirt.org/releases/stable/rpm/EL/$releasever/
enabled=1
skip_if_unavailable=1
gpgcheck=0

[ovirt-3.4-stable]
name=Latest oVirt 3.4.z Releases
baseurl=http://ovirt.org/releases/3.4/rpm/EL/$releasever/
enabled=1
skip_if_unavailable=1
gpgcheck=0

[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
includepkgs=epel-release,python-uinput,puppet,python-lockfile,python-cpopen,python-ordereddict,python-pthreading,python-inotify,python-argparse,novnc,python-ply,python-kitchen,python-daemon,python-websockify,livecd-tools,spice-html5,mom
gpgcheck=0

[ovirt-glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=0

[ovirt-glusterfs-noarch-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/noarch
enabled=1
skip_if_unavailable=1
gpgcheck=0
Run the following commands on gs1.
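A minimal sketch of those commands, assuming the standard oVirt 3.4 packages from the repositories above:

```
yum install -y ovirt-engine
engine-setup
```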
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
         Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
         Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20140508054649.log
         Version: otopi-1.2.0 (otopi-1.2.0-1.el6)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
[ INFO ] iptables will be configured as firewall manager.
[ INFO ] Stage: Setup validation
[ INFO ] Stage: Transaction setup
[ INFO ] Stopping engine service
[ INFO ] Stopping websocket-proxy service
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Initializing PostgreSQL
[ INFO ] Creating PostgreSQL 'engine' database
[ INFO ] Configuring PostgreSQL
[ INFO ] Creating Engine database schema
[ INFO ] Creating CA
[ INFO ] Configuring WebSocket Proxy
[ INFO ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
[ INFO ] Starting engine service
[ INFO ] Restarting httpd
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20140508054842-setup.conf'
[ INFO ] Stage: Clean up
         Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20140508054649.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of setup completed successfully
At this point the installation of the management node is complete.
Next, add nodes and storage domains; see section 13.
At the time of writing, oVirt had reached version 3.4.
Here we will use the distributed-replicate storage created earlier, which raises the availability of the system services.
A few points deserve attention:
1. The hosts' CPU model must be one of Westmere (Westmere E56xx/L56xx/X56xx), Nehalem (Intel Core i7 9xx), Penryn (Intel Core 2 Duo P9xxx), or Conroe (Intel Celeron_4x0); otherwise the cluster CPU type will be incompatible with the hosts and the data center will fail to start.
For the CPU family table, see Intel Architecture and Processor Identification With CPUID Model and Family Numbers.
2. It is advisable, following section 11, to prepare in advance a virtual machine containing the oVirt engine, with its disk in RAW format, so that during setup of the management VM it can be imported as an OVF or copied over the virtual disk, shortening the window in which a failure can occur.
On every machine, add a hosts entry for the FQDN of the engine that will run as a VM, here ha.lofyer.org.
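For example (the engine VM's IP is an assumed placeholder):

```
echo "192.168.1.200 ha.lofyer.org" >> /etc/hosts   # IP chosen for illustration
```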
Storage can again use the glusterfs volume from before, mounted as NFSv3, with brick ownership set to vdsm.kvm (36:36). The network and repository configuration is identical to the previous section: enable the network service and disable NetworkManager, apply the same ifcfg settings, and add the same oVirt, EPEL, and GlusterFS repositories on every server.
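Deployment itself starts with the hosted-engine setup tool; a minimal sketch for oVirt 3.4:

```
yum install -y ovirt-hosted-engine-setup
hosted-engine --deploy
```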
Answer a series of questions, as shown below:
[ INFO ] Stage: Initializing
         Continuing will configure this host for serving as hypervisor and create a VM where you have to install oVirt Engine afterwards.
         Are you sure you want to continue? (Yes, No)[Yes]: yes
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
         Configuration files: []
         Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140508182241.log
         Version: otopi-1.2.0 (otopi-1.2.0-1.el6)
[ INFO ] Hardware supports virtualization
[ INFO ] Bridge ovirtmgmt already created
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
[ INFO ] Installing on first host
         Please provide storage domain name. [hosted_storage]:
         Local storage datacenter name is an internal name and currently will not be shown in engine's admin UI.
         Please enter local datacenter name [hosted_datacenter]:
[WARNING] Failed to resolve ha.lofyer.org using DNS, it can be resolved only locally
          Please provide the name of the SMTP server through which we will send notifications [localhost]:
          Please provide the TCP port number of the SMTP server [25]:
          Please provide the email address from which notifications will be sent [root@localhost]:
          Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:
[ INFO ] Stage: Setup validation
[ INFO ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Configuring libvirt
[ INFO ] Configuring VDSM
[ INFO ] Starting vdsmd
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ INFO ] Initializing sanlock lockspace
[ INFO ] Initializing sanlock metadata
[ INFO ] Creating VM Image
[ INFO ] Disconnecting Storage Pool
[ INFO ] Start monitoring domain
[ INFO ] Configuring VM
[ INFO ] Updating hosted-engine configuration
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
         The following network ports should be opened:
             tcp:5900
             tcp:5901
             udp:5900
             udp:5901
         An example of the required configuration for iptables can be found at:
             /etc/ovirt-hosted-engine/iptables.example
         In order to configure firewalld, copy the files from /etc/ovirt-hosted-engine/firewalld to /etc/firewalld/services and execute the following commands:
             firewall-cmd -service hosted-console
[ INFO ] Creating VM
         You can now connect to the VM with the following command:
             /usr/bin/remote-viewer vnc://localhost:5900
         Use temporary password "2067OGHU" to connect to vnc console.
         Please note that in order to use remote-viewer you need to be able to run graphical applications.
         This means that if you are using ssh you have to supply the -Y flag (enables trusted X11 forwarding).
         Otherwise you can run the command from a terminal in your preferred desktop environment.
         If you cannot run graphical applications you can connect to the graphic console from another host or connect to the console using the following command:
             virsh -c qemu+tls://192.168.1.150/system console HostedEngine
         If you need to reboot the VM you will need to start it manually using the command:
             hosted-engine --vm-start
         You can then set a temporary password using the command:
             hosted-engine --add-console-password
         The VM has been started. Install the OS and shut down or reboot it. To continue please make a selection:
[ INFO ] Creating VM
         You can now connect to the VM with the following command:
             /usr/bin/remote-viewer vnc://localhost:5900
         Use temporary password "2067OGHU" to connect to vnc console.
         Please note that in order to use remote-viewer you need to be able to run graphical applications.
         This means that if you are using ssh you have to supply the -Y flag (enables trusted X11 forwarding).
         Otherwise you can run the command from a terminal in your preferred desktop environment.
         If you cannot run graphical applications you can connect to the graphic console from another host or connect to the console using the following command:
             virsh -c qemu+tls://192.168.1.150/system console HostedEngine
         If you need to reboot the VM you will need to start it manually using the command:
             hosted-engine --vm-start
         You can then set a temporary password using the command:
             hosted-engine --add-console-password
         Please install and setup the engine in the VM.
         You may also be interested in installing ovirt-guest-agent-common package in the VM.
         To continue make a selection from the options below:
         (1) Continue setup - engine installation is complete
         (2) Power off and restart the VM
         (3) Abort setup
[ INFO ] Engine replied: DB Up!Welcome to Health Status!
[ INFO ] Waiting for the host to become operational in the engine. This may take several minutes...
[ INFO ] Still waiting for VDSM host to become operational...
[ INFO ] The VDSM Host is now operational
         Please shutdown the VM allowing the system to launch it as a monitored service.
[ INFO ] Enabling and starting HA services
         Hosted Engine successfully set up
[ INFO ] Stage: Clean up
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
At this point, run "hosted-engine --vm-start" to start the engine.
Welcome to the core!
Mainstream cloud computing today takes three forms: SaaS (Software-as-a-Service), PaaS (Platform-as-a-Service), and IaaS (Infrastructure-as-a-Service). IaaS and PaaS both count as infrastructure pieces; SaaS can be combined freely with them or used on its own.
Virtualization has attracted a great deal of attention and been pushed to the crest of the wave. Faced with today's many open-source virtualization platforms, such as XenServer, CloudStack, OpenStack, Eucalyptus, oVirt, and OpenVZ, it is easy to be dazzled and unsure which one fits.
How each platform is implemented: full virtualization, paravirtualization, TBD.
When writing this document I knew a thing or two about oVirt, had only scratched the surface of some other platforms (XenServer, OpenStack), and had barely used the rest, so I will stick to what I know best.
Taken together, an IaaS cloud platform has the following characteristics:

Virtualization

As the core of a cloud platform, virtualization is one of the main forms that resource utilization takes. Network, storage, CPU, and even GPU resources are realized chiefly through virtual machines.

Distribution

Distribution integrates resources over the network on top of shared storage; it is a precondition for turning hardware into pooled resources.

High availability

For a cloud platform of any real scale, high availability of the management node and of critical services is essential. As of this writing, oVirt 3.4 can already make the management node highly available.

Compatibility

There are many cloud platforms, each with its own strengths, and it is very likely that different platforms will be deployed in the same data center. The main services (virtual networks, storage, VMs, and so on) therefore need a degree of compatibility: for example, oVirt can consume virtual networks provided by OpenStack Neutron, and Foreman can conveniently deploy new machines onto oVirt.

There are also APIs such as Deltacloud and libvirt, which users can employ to build their own integrated cloud-management tools.

Resource pooling

Network, storage, CPU, or GPU capacity can be carved into resource pools, jointly or separately, and allocated through quotas so that it is used sensibly.

Security

Modern enterprises have very demanding security requirements: beyond traditional data encryption and access control, some defense even against social engineering is expected. User data must be protected against anyone who is not an enterprise administrator, so that even a copied-out VM disk file cannot be read directly.

Demand orientation

At the computing level, the best resources go first to the most important services; at the service level, the platform must be customizable.
Management on a dedicated physical machine
A highly available management machine
Section 11 (6-2): set up the management engine;
Section 12 (6-3): set up a highly available management engine;
Section 13 (6-4): add nodes to build a complete cloud platform;
Section 14 (6-5): advanced applications.
When a volume is created, glusterfs changes the brick owner to root.root; for some applications remember to change the owner of the brick directory back (for example with a chown in _/etc/rc.local_), but do not touch the hidden .glusterfs directory inside the brick.
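A sketch of that rc.local workaround (brick paths are examples; 36:36 is vdsm.kvm as used in the oVirt sections):

```
# /etc/rc.local — restore brick ownership after glusterd starts;
# deliberately non-recursive so .glusterfs stays untouched
chown 36:36 /gluster_brick0 /gluster_brick1
```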
The brick layout described earlier is not ideal for every scenario; you can instead create a directory under /brickX, say data1, and pass HOST:/brickX/data1 as the brick when creating the volume, so the storage space is used more flexibly.
How bricks are grouped and ordered: striping (or plain distribution) comes first, and replication adds redundancy on top of it. When computing capacity, bricks in the same replica set are merged (several count as one), and one stripe set counts as one effective block. Suppose we have 4 hosts with 8 bricks of 5GB each (figure omitted).
The order in which bricks are listed in the create command determines the grouping: one ordering yields one stripe/replica combination, and reordering the bricks yields another (the original commands and grouping figures are omitted here).
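As a sketch of why the ordering matters (host and brick names are illustrative, not those from the omitted figures):

```
# With "replica 2", consecutive bricks form a replica pair, so listing both
# of a host's bricks next to each other would replicate within one machine.
# Interleaving hosts puts every replica pair on two different machines:
gluster volume create vol1 stripe 2 replica 2 \
    host1:/brick1 host2:/brick1 host3:/brick1 host4:/brick1 \
    host1:/brick2 host2:/brick2 host3:/brick2 host4:/brick2
```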
Because glusterfs itself occupies port 2049, it generally cannot coexist with an NFS server on the same machine unless the NFS service port is changed. Mount the volume over NFSv3 like this:
mount -t nfs -o vers=3 server1:/volume1 /mnt
First mount the volume on one of the servers or on a client, then export it via CIFS:
[glustertest]
comment = For testing a Gluster volume exported through CIFS
path = /mnt/glusterfs
read only = no
guest ok = yes
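Once Samba picks up the share, a client can mount it; a sketch (server name and client mount point are examples):

```
service smb reload
mkdir -p /mnt/cifs
mount -t cifs //server1/glustertest /mnt/cifs -o guest
```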
Once split-brain has occurred, the nodes' copies may disagree. It can be inspected and repaired with the following steps.
1. Locate the split-brain files, either with the command shown below or by looking for files that still return Input/Output errors on the client.
2. Close any open files, or shut down the virtual machines using them.
3. Determine which replica is the correct one.
4. Reset the extended attributes accordingly.
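For step 1, the query (available since glusterfs 3.3), using the volume created earlier in this chapter:

```
gluster volume heal gluster-vol1 info split-brain
```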
If you remove a brick while its volume is in use, you will hit "/bricks/app or a prefix of it is already part of a volume" when trying to reuse it; glusterfs 3.3 and later behave this way. Solution:
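A widely used fix is to strip gluster's extended attributes from the brick path (a sketch, reusing the path from the error message):

```
setfattr -x trusted.glusterfs.volume-id /bricks/app
setfattr -x trusted.gfid /bricks/app
rm -rf /bricks/app/.glusterfs
service glusterd restart
```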
For medium-sized or larger deployments, use a DNS server to resolve the nodes and spare yourself the trouble of editing hosts files.
Since we are building a solid foundation, glusterfs here uses the distributed striped replicated layout, on four machines preinstalled with CentOS 6 (SELINUX set to permissive).
Since my environment has no dedicated DNS yet, edit the hosts file on every machine first (a sketch follows):
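gs1's address appears later in this chapter; the other three are assumptions:

```
192.168.1.81 gs1.example.com
192.168.1.82 gs2.example.com
192.168.1.83 gs3.example.com
192.168.1.84 gs4.example.com
```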
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key

[glusterfs-noarch-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/noarch
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key

[glusterfs-source-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes. - Source
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/SRPMS
enabled=0
skip_if_unavailable=1
gpgcheck=1
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key
Every node can be regarded as a gluster server. Install the gluster server package and the XFS userspace tools (a sketch follows):
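Assuming the package names provided by the repositories above:

```
yum install -y glusterfs-server xfsprogs
service glusterd start
chkconfig glusterd on
```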
Suppose each machine has, besides the system disk, two 1TB SATA disks. Partition them, create logical volumes, format, and mount:
fdisk /dev/sdb << EOF    # device name is an example; repeat for each data disk
n
p
1


w
EOF

(The two blank lines accept fdisk's default first and last cylinders.)
To use a physical partition directly (a sketch follows):
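Assuming the partition created above and the brick mount points used later in this section:

```
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /gluster_brick0
mount /dev/sdb1 /gluster_brick0
```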
Or use a logical volume (a sketch follows):
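Names here are illustrative:

```
pvcreate /dev/sdb1
vgcreate vg_gluster /dev/sdb1
lvcreate -l 100%FREE -n lv_brick0 vg_gluster
mkfs.xfs -i size=512 /dev/vg_gluster/lv_brick0
mkdir -p /gluster_brick0
mount /dev/vg_gluster/lv_brick0 /gluster_brick0
```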
Why XFS? XFS journals its metadata, so data can be recovered quickly; it also supports online growth and defragmentation. Other filesystems such as EXT3 and EXT4 have not been fully tested here.
On any one of the machines, say gs2.example.com, run:
peer probe gs1.example.com
peer probe gs3.example.com
peer probe gs4.example.com
Add the bricks to a volume, taking care to order them sensibly:
volume create gluster-vol1 stripe 2 replica 2 \
    gs1.example.com:/gluster_brick0 gs1.example.com:/gluster_brick1 \
    gs2.example.com:/gluster_brick0 gs2.example.com:/gluster_brick1 \
    gs3.example.com:/gluster_brick0 gs3.example.com:/gluster_brick1 \
    gs4.example.com:/gluster_brick0 gs4.example.com:/gluster_brick1 force
volume start gluster-vol1
volume status
Status of volume: gluster-vol1
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick gs1.example.com:/gluster_brick0           49152   Y       1984
Brick gs1.example.com:/gluster_brick1           49153   Y       1995
Brick gs2.example.com:/gluster_brick0           49152   Y       1972
Brick gs2.example.com:/gluster_brick1           49153   Y       1983
Brick gs3.example.com:/gluster_brick0           49152   Y       1961
Brick gs3.example.com:/gluster_brick1           49153   Y       1972
Brick gs4.example.com:/gluster_brick0           49152   Y       1975
Brick gs4.example.com:/gluster_brick1           49153   Y       1986
NFS Server on localhost                         2049    Y       1999
Self-heal Daemon on localhost                   N/A     Y       2006
NFS Server on gs2.example.com                   2049    Y       2007
Self-heal Daemon on gs2.example.com             N/A     Y       2014
NFS Server on gs2.example.com                   2049    Y       1995
Self-heal Daemon on gs2.example.com             N/A     Y       2002
NFS Server on gs3.example.com                   2049    Y       1986
Self-heal Daemon on gs3.example.com             N/A     Y       1993

There are no active volume tasks
volume info all
Volume Name: gluster-vol1
Type: Distributed-Striped-Replicate
Volume ID: bc8e102c-2b35-4748-ab71-7cf96ce083f3
Status: Started
Number of Bricks: 2 x 2 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: gs1.example.com:/gluster_brick0
Brick2: gs1.example.com:/gluster_brick1
Brick3: gs2.example.com:/gluster_brick0
Brick4: gs2.example.com:/gluster_brick1
Brick5: gs3.example.com:/gluster_brick0
Brick6: gs3.example.com:/gluster_brick1
Brick7: gs4.example.com:/gluster_brick0
Brick8: gs4.example.com:/gluster_brick1
When mounting with glusterfs-fuse, the client's hosts file needs an entry resolving any one of the gluster server nodes:
127.0.0.1    localhost.localdomain localhost
::1          localhost6.localdomain6 localhost6
192.168.1.81 gs1.example.com
Install glusterfs-fuse, mount the gluster volume as glusterfs, and write 1MB files to see how they are distributed across the bricks (a sketch of the commands follows):
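The file names match the listings below; with stripe 2, each 1MB file appears as two 512K pieces:

```
yum install -y glusterfs glusterfs-fuse
mount -t glusterfs gs1.example.com:/gluster-vol1 /mnt
for f in a b c d; do dd if=/dev/zero of=/mnt/$f.img bs=1M count=1; done
```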
Then look on each of the four servers:
[root@gs1 ~]# ls -lh /gluster_brick*
/gluster_brick0:
total 1.0M
-rw-r--r--. 2 root root 512K Apr 22 17:13 a.img
-rw-r--r--. 2 root root 512K Apr 22 17:13 d.img

/gluster_brick1:
total 1.0M
-rw-r--r--. 2 root root 512K Apr 22 17:13 a.img
-rw-r--r--. 2 root root 512K Apr 22 17:13 d.img

[root@gs2 ~]# ls -lh /gluster_brick*
/gluster_brick0:
total 1.0M
-rw-r--r--. 2 root root 512K Apr 22 17:13 a.img
-rw-r--r--. 2 root root 512K Apr 22 17:13 d.img

/gluster_brick1:
total 1.0M
-rw-r--r--. 2 root root 512K Apr 22 17:13 a.img
-rw-r--r--. 2 root root 512K Apr 22 17:13 d.img

[root@gs3 ~]# ls -lh /gluster_brick*
/gluster_brick0:
total 1.0M
-rw-r--r--. 2 root root 512K Apr 22 17:13 b.img
-rw-r--r--. 2 root root 512K Apr 22 17:13 c.img

/gluster_brick1:
total 1.0M
-rw-r--r--. 2 root root 512K Apr 22 17:13 b.img
-rw-r--r--. 2 root root 512K Apr 22 17:13 c.img

[root@gs4 ~]# ls -lh /gluster_brick*
/gluster_brick0:
total 1.0M
-rw-r--r--. 2 root root 512K Apr 22 17:13 b.img
-rw-r--r--. 2 root root 512K Apr 22 17:13 c.img

/gluster_brick1:
total 1.0M
-rw-r--r--. 2 root root 512K Apr 22 17:13 b.img
-rw-r--r--. 2 root root 512K Apr 22 17:13 c.img
That completes the configuration; the next part covers usage and a few tricks.
Since glusterfs is a filesystem, we should take a look at FUSE.
FUSE (Filesystem in Userspace) is a mechanism on Unix-like systems that lets ordinary users create, modify, and access filesystems. It essentially acts as a bridge between the kernel interfaces and userspace programs, and is available on many operating systems today, such as Linux, BSD, Solaris, OSX, and Android.
FUSE grew out of AVFS. Unlike a traditional filesystem, which reads and writes data on disk, FUSE plays a "translator" role in front of a filesystem or disk and does not itself store data.
Linux has many FUSE-based implementations, such as the glusterfs-fuse we are about to use.