
Exploring ZStack + Neutron Integration (2017-07)

This article is simpler than the earlier ZStack + Neutron one. ZStack's VXLAN implementation does not use OVS, so... never mind, we'll just go straight to OVS. Also, if you are using VyOS, I haven't looked into it; that will have to wait for a later post.

The approach below is purely from an implementation standpoint and should in principle work on other platforms as well; for a proper integration, write your own module.

Environment: a single host with two NICs on the same subnet (done purely for convenience, don't copy this). ZStack is already installed; eno16780032 is the management network, eno33561344 is the ZStack L2 network interface, and the L3 network is no-VLAN.

Integrating OVS/ODL

  1. Install Open vSwitch on the compute node.

wget https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
rpm -i rdo-release-ocata-3.noarch.rpm
yum --enablerepo=* install -y openvswitch

yum --enablerepo=* install -y openvswitch-ovn*

systemctl enable openvswitch
systemctl start openvswitch

  2. Now let me explain what we are going to do. First, ZStack's flat network uses a network namespace connected to the Linux bridge through a veth pair; the end inside the namespace is called inner0 and the outside end is outer0. A dnsmasq-based DHCP service runs inside the namespace, so think of the namespace as a switch that only provides DHCP.

Next we create an OVS bridge. To put the VM's vNIC vnic1.0 under OVS control while still reaching the DHCP switch, we need a cable connecting the OVS bridge to the Linux bridge, like so (the painstakingly drawn ASCII diagram is gone!).

Create the OVS bridge

ovs-vsctl add-br ovs-br0
ip link set dev ovs-br0 up

Create a veth pair

ip link add name veth0 type veth peer name veth0p

Wire up the physical NIC

brctl delif br_eno16780032 eno16780032
ovs-vsctl add-port ovs-br0 br_eno16780032

Wire up the veth pair

brctl addif br_eno16780032 veth0
ovs-vsctl add-port ovs-br0 veth0p
ip link set dev veth0 up
ip link set dev veth0p up

Finally, since the VM interface vnic1.0 gets created on the Linux bridge every time, we need to pull it off and plug it into OVS so that it can be controlled by flow rules (this can be saved as a script).

brctl delif br_eno16780032 vnic1.0
ovs-vsctl add-port ovs-br0 vnic1.0
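Since ZStack re-attaches vnic1.0 to the Linux bridge every time the VM starts, the two commands above are exactly what you would put into that script. A minimal sketch (the bridge, OVS bridge and vNIC names are the ones assumed in this setup):

#!/bin/bash
# re-plug a ZStack vNIC from the Linux bridge into the OVS bridge
NIC=${1:-vnic1.0}
LBR=br_eno16780032
OVSBR=ovs-br0
# detach the vNIC from the Linux bridge if it is attached there
brctl delif $LBR $NIC 2>/dev/null
# attach it to the OVS bridge; --may-exist makes the script safe to re-run
ovs-vsctl --may-exist add-port $OVSBR $NIC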

Q: Why put the physical port eno33561344 into OVS? A: To keep things pure :)

Q: Why not put outer0 directly into OVS? A: ZStack's logic tries to add outer0 to the Linux bridge every time a VM is created; if it has already been added to another bridge, VM creation fails.

Finally, hand the host's OVS instance over to ODL. No diagram this time.

ovs-vsctl set-manager tcp:opendaylight_ip:6640

You can also set the controller on a specific bridge only, if you don't want everything to be controlled

ovs-vsctl set-controller ovs-br0 tcp:controller_ip:6633

Install the ODL features

cd ODL_DIR
./bin/start
./bin/client

Install netvirt and the DLUX UI, along with the yang tooling

opendaylight-user@root> feature:install odl-netvirt-openstack odl-dlux-all odl-dlux-yangman odl-mdsal-apidocs odl-netvirt-ui

What if the host is powered off, or a new host is added? Write the network config files and scripts yourself; if convenient, you can sign up for training at ZStack.

Integrating DPDK

Why do this? Because Intel says that combining DPDK with OVS greatly improves throughput: https://download.01.org/packet-processing/ONPS2.1/Intel_ONP_Release_2.1_Performance_Test_Report_Rev1.0.pdf

OK, let's swap modules and re-plug cables. If you are not familiar with DPDK, skim the tutorial at http://dpdk.org first.

First edit the grub file and add the following to the kernel boot parameters; size the hugepages to your situation.

iommu=pt intel_iommu=on hugepagesz=2M hugepages=2048 isolcpus=0-3

Add the above to the CMDLINE in /etc/default/grub, run "grub2-mkconfig > /boot/grub2/grub.cfg", and reboot.
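After rebooting it is worth checking that the parameters and hugepages actually took effect; a quick check (standard proc paths, the exact counts depend on what you configured):

cat /proc/cmdline        # should contain iommu=pt intel_iommu=on hugepagesz=2M ...
grep Huge /proc/meminfo  # HugePages_Total should match the configured count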

Then enable the CentOS Extras repository and install DPDK. This step is much less painful than when DPDK first came out.

yum install dpdk dpdk-tools driverctl

Bind the NIC to vfio-pci. Yes, the Intel 10 GbE card.

modprobe vfio_pci

driverctl -v list-devices | grep Ether
0000:02:00.0 XXX
driverctl set-override 0000:02:00.0 vfio-pci
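To confirm the override worked, check which driver the device is now bound to (the PCI address is the one from this example):

driverctl list-overrides
lspci -k -s 0000:02:00.0   # should report "Kernel driver in use: vfio-pci"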

Configure OVS to use DPDK; here we create a second OVS bridge.

ovs-vsctl add-br ovs-br1

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0xf

ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xf

systemctl restart openvswitch

ovs-vsctl list Open_vSwitch
[dpdk, dpdkr, dpdkvhostuser, dpdkvhostuserclient, geneve, gre, internal, ipsec_gre, lisp, patch, stt, system, tap, vxlan]

Then create ovs-br1 and add two vhost-user ports to it.

ovs-vsctl add-br ovs-br1 -- set bridge ovs-br1 datapath_type=netdev
ovs-vsctl add-port ovs-br1 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
ovs-vsctl add-port ovs-br1 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
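For the VMs to reach the physical network, the DPDK-bound NIC also has to be attached to the same bridge. A sketch, assuming the OVS 2.6 convention where a port named dpdk0 is bound to the first DPDK device:

ovs-vsctl add-port ovs-br1 dpdk0 -- set Interface dpdk0 type=dpdk
ovs-vsctl show    # vhost-user1, vhost-user2 and dpdk0 should all appear under ovs-br1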

At this point the work is basically done. Next, modify the VM definition on the ZStack compute node so that the following parameters are added.

-chardev socket,id=char1,path=/run/openvswitch/vhost-user1 \
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
-object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc

How? First virsh shutdown 1, then virsh edit 1, and add the following in the appropriate place (enable the NUMA option in the platform settings).

...

...

Finally, virsh start 1.

Start a second VM in the same way, run iperf inside the VMs, and compare the numbers; overall you should see some improvement and get closer to the NIC's full capacity.

Note: all of the above is just for fun. Do not casually test it in your production ZStack environment!

References:
https://software.intel.com/en-us/articles/set-up-open-vswitch-with-dpdk-on-ubuntu-server
https://www.ovirt.org/blog/2017/09/ovs-dpdk/
https://libvirt.org/formatdomain.html


title: "ZStack Glusterfs 超融合方案" date: 2017-07-14 categories: - "cloud-infra" - "draft"


ZStack 2.1 already supports NFS as the storage for MNHA nodes, so we can directly reuse the earlier GlusterFS + oVirt hyper-converged approach. The process is as follows.

Test environment: 3 servers with dual gigabit NICs, all installed with the ZStack expert-mode OS.

Deploy a three-node, non-striped, two-replica distributed GlusterFS volume. After installing the ZStack compute node role, install GlusterFS on each of the three servers with the following commands (which did not survive in this note; see the sketch below).

Deploy the ZStack HA management nodes. Mount the GlusterFS path at /zstack/glusterfs/mnha/ on each node, but pay attention to the NFS/GlusterFS IP, because the way GlusterFS connects to its server differs from
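The GlusterFS commands themselves did not survive in this note. A minimal sketch of what the three-node, two-replica volume and the MNHA mount could look like (host names, brick paths and the volume name are assumptions):

# on every node
yum install -y glusterfs-server
systemctl enable --now glusterd
# on one node: form the pool and create a 2-replica distributed volume across 6 bricks
gluster peer probe node2
gluster peer probe node3
gluster volume create zstack-mnha replica 2 \
    node1:/gluster/brick1 node2:/gluster/brick1 \
    node2:/gluster/brick2 node3:/gluster/brick2 \
    node3:/gluster/brick3 node1:/gluster/brick3 force
gluster volume start zstack-mnha
# on each management node
mount -t glusterfs node1:/zstack-mnha /zstack/glusterfs/mnha/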

title: "ZStack OpenStack API wrapper" date: 2017-09-14 categories: - "cloud-infra"


This is a flag.

title: "在ZStack中集成OpenStack Neutron组件" date: 2017-06-26 categories: - "cloud-infra" - "draft" tags: - "Cloud Computing" - "TBD"


With my current grasp of the ZStack source code I can't integrate Neutron in a productized way, so I can only get it working with some slightly hacky tricks.

Materials: single-node ZStack, OpenStack Neutron with Dashboard and OVS bridge

Goal: modify the boot XML of a ZStack instance (or switch the network to an openvswitch bridge when creating the host), call the Neutron API, and bridge the instance's NIC onto the OVS bridge.

Steps:

  1. Set up ZStack (omitted).

  2. Set up an OpenStack Neutron instance; see the script at https://...

  3. Write the hook script

  4. Boot and test

  5. Check it in the OpenStack Dashboard

Thoughts:

This is the advantage of a KVM platform: great interoperability. Also, Neutron could be shipped on its own as a VM appliance; with cloud-init it would be even better.

Procedure:

1. Integrating oVirt with Neutron

2. Integrating ZStack with Neutron

3. Integrating Neutron with other products

Reference links:

http://www.ovirt.org/develop/release-management/features/cloud/neutronvirtualappliance/

TBD

Using VXLAN networking and creating a VTEP with Linux bridge/OVS

This is an introductory article to help beginners understand the basics and use of VXLAN. OVS is just a tool; on newer kernels the same can be done directly with the ip command.

The content comes in two parts: the first uses a simple VXLAN tunnel network, and the second attaches an OVS-emulated VTEP device.

Part 1: Using a VXLAN tunnel

The idea is to create peer interfaces on top of network namespaces (test environment only; in a real virtualization environment these are the VM's veth interfaces and the OVS tunnel interfaces) so that they can talk to each other through a VXLAN tunnel.

host1 has physical interface eth0 (192.168.0.101) and host2 has physical interface eth0 (192.168.0.102); both are on the same LAN.

The experiment topology is shown in the figure below.

On host1, create a veth pair whose peer end will be attached to the OVS bridge: veth1 represents the VM interface (address 10.0.0.1) and veth1p is the end attached to the OVS bridge.

Add a network namespace

ip netns add ns-host1

Add the veth pair

ip link add name veth1 type veth peer name veth1p

Move the VM interface into the namespace

ip link set dev veth1 netns ns-host1

Set the VM interface IP

ip netns exec ns-host1 ifconfig veth1 10.0.0.1/24 up

Add the OVS bridge

ovs-vsctl add-br ovs-vxlan

Attach the VM's peer interface to the OVS bridge

ovs-vsctl add-port ovs-vxlan veth1p

Bring the interfaces up

ip link set ovs-vxlan up
ip link set veth1p up

Do the same on host2.

ip netns add ns-host2
ip link add name veth1 type veth peer name veth1p
ip link set dev veth1 netns ns-host2
ip netns exec ns-host2 ifconfig veth1 10.0.0.2/24 up

ovs-vsctl add-br ovs-vxlan
ovs-vsctl add-port ovs-vxlan veth1p
ip link set ovs-vxlan up
ip link set veth1p up

Then create the VXLAN tunnel on host1 and host2.

On host1, point the VXLAN remote end at host2's eth0, with VNI (VXLAN Network Identifier) 123.

ovs-vsctl add-port ovs-vxlan vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=192.168.0.102 options:key=123

On host2, point the VXLAN remote end at host1's eth0.

ovs-vsctl add-port ovs-vxlan vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=192.168.0.101 options:key=123
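As mentioned at the top, on a recent kernel the same tunnel can be built without OVS, using a native vxlan device attached to an ordinary Linux bridge. A rough equivalent for host1 (the bridge name and the standard VXLAN port 4789 are assumptions):

ip link add vxlan123 type vxlan id 123 remote 192.168.0.102 dstport 4789 dev eth0
ip link set vxlan123 up
brctl addif br-vm vxlan123   # br-vm is whatever bridge the VM/namespace port sits on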

That completes the simplest OVS VXLAN setup. From the VM on host2, try pinging the VM on host1.

ip netns exec ns-host2 ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=1.74 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.734 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.669 ms

You can also add a physical interface of the host, e.g. eth1, to ovs-vxlan so that hosts or network devices attached to it can join this VXLAN network. When a third host is added, a GRE network would need a remote_ip set on each gre0 to connect them pairwise, in a star or a ring (enable the OVS spanning tree protocol: ovs-vsctl set bridge ovs-gre stp_enable=true), whereas a VXLAN network

Part 2: Connecting VMs and physical machines over a VXLAN tunnel

Part 3: Attaching an OVS VTEP device

References: Comparing brctl and the bridge command; Using OVS GRE networks in oVirt; Building a VXLAN tunnel lab on Open vSwitch; Connecting VMs Using Tunnels (Userspace)

Testing image deduplication for virtualization platforms (OpenDedup)

OpenDedup is an open-source deduplicating file system (https://github.com/opendedup/sdfs). It can run distributed and supports NFS, iSCSI and more; it looks very capable. The author is Sam Silverberg of Veritas.

Note

RHEL 7 already ships VDO, but I haven't tested it yet.

You can download an image from the official site, or, what I find more practical, the NAS system.

A quick test

The goal is to cut down local storage usage; test it with OpenDedup.

  1. First test the data on my news server: mostly index files, frequently changing, large and small.

  2. Then pick a KVM software platform.

Install:

wget http://www.opendedup.org/downloads/sdfs-latest.deb
sudo dpkg -i sdfs-latest.deb

sudo su
echo "* hardnofile 65535" >> /etc/security/limits.conf
echo "* soft nofile 65535" >> /etc/security/limits.conf
exit
sudo mkfs.sdfs --volume-name=pool0 --volume-capacity=256GB
sudo mkdir /media/pool0 
sudo mount.sdfs pool0 /media/pool0/

After copying the data in, the original 9.3 GB of index data dropped to 8.5 GB; not as effective as hoped.

Possible reason:

sdfs offers a rich set of command-line options that I didn't use; using them, e.g. a smaller block size, might get closer to expectations.

Integrating OpenStack, OpenDaylight and a hardware SDN switch

This article describes how to use an SDN switch with OpenStack, with OpenDaylight as the controller; the tested network type is vxlan.

The target is the OpenStack Neutron component, but in principle it applies to any cloud platform that can consume Neutron services, such as oVirt.

For Neutron I used an existing single-node OpenStack deployment (see https://github.com/lofyer/openstack-pxe-deployment; to avoid the error at the end of section 3.1, use a release earlier than Newton, though I couldn't help deploying the latest). To reduce deployment hassle, I will later try using a Neutron VM appliance directly, which is easier to integrate (not yet sure whether it affects performance).

update 2017-05-27: ZStack's current cloud-router feature is an NFV soft router implemented with VyOS; I'm considering porting Neutron to ZStack so that the SDN work can focus on Neutron.

update 2017-06-01: added a Centec switch; with kernel 4.10 the VTEP can be implemented directly in the kernel, see the Linux docs

update 2017-06-07: switched OpenStack from Ocata to Newton and redid experiment 3.3.

update 2017-06-12: use a Neutron appliance and add hooks to build a ZStack SDN solution; search for the related oVirt material for background

1. Environment

Two Pica8 P-5101 (48+4*8 10Gb) SDN switches (so loud my head hurts), OpenStack Ocata, latest OpenDaylight, and a management machine.

Mininet VM address: 192.168.0.68

Workstation address: 192.168.0.55

2. Architecture and Mininet simulation

The topology was built with MiniEdit, with "Start CLI" enabled in the preferences; after "ssh -Y mininet@MININET_IP" from the workstation, run "sudo ~/mininet/examples/miniedit.py" to start it.

2.1. Basic experiment: interconnecting switches

A written description is available at http://blog.scottlowe.org/2012/11/27/connecting-ovs-bridges-with-patch-ports/ and the links therein.

MiniEdit topology:

Goal: connect the two switches so that h1 and h3 can reach each other.

Method: join one port on each switch into a patch pair.

ovs-vsctl -- add-port s1 patch1 -- set interface patch1 type=patch options:peer=patch2 -- add-port s2 patch2 -- set interface patch2 type=patch options:peer=patch1

Result:

root@mininet-vm:/home/mininet# ovs-vsctl show
918037ec-b307-45d7-a75a-f1ac4337d135
    Bridge "s1"
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "s1"
            Interface "s1"
                type: internal
        Port "s1-eth2"
            Interface "s1-eth2"
        Port "s1-eth1"
            Interface "s1-eth1"
        Port "patch1"
            Interface "patch1"
                type: patch
                options: {peer="patch2"}
    Bridge "s2"
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "s2-eth1"
            Interface "s2-eth1"
        Port "s2"
            Interface "s2"
                type: internal
        Port "patch2"
            Interface "patch2"
                type: patch
                options: {peer="patch1"}
    ovs_version: "2.0.2"

2.2. Advanced experiment: flow tables

MiniEdit topology: same as above.

Goal: block h1-h2 and h2-h3 traffic while h1 and h3 can still reach each other.

Method: install the following flows.

Block packets between h2 and h1/h3:

ovs-ofctl add-flow s1 in_port=2,arp,ip,nw_dst=10.0.0.1,actions=drop
ovs-ofctl add-flow s1 in_port=2,arp,ip,nw_dst=10.0.0.3,actions=drop
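Depending on the OVS version, a single flow cannot match arp and ip at the same time (they are different dl_types), so the same policy may have to be written as separate flows, which is roughly what the flow dump below ends up showing. A sketch:

ovs-ofctl add-flow s1 ip,in_port=2,nw_dst=10.0.0.1,actions=drop
ovs-ofctl add-flow s1 ip,in_port=2,nw_dst=10.0.0.3,actions=drop
ovs-ofctl add-flow s1 arp,in_port=2,arp_tpa=10.0.0.1,actions=drop
ovs-ofctl add-flow s1 arp,in_port=2,arp_tpa=10.0.0.3,actions=drop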

Result

mininet> h1 ping h2 -c 2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
^C
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1006ms

mininet> h2 ping h1 -c 2
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
^C
--- 10.0.0.1 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1000ms

mininet> h3 ping h2 -c 2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
^C
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms

mininet> h2 ping h3 -c 2
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
^C
--- 10.0.0.3 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1006ms

mininet> h1 ping h3 -c 2
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=2.37 ms
64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.561 ms

--- 10.0.0.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.561/1.466/2.372/0.906 ms

mininet> h3 ping h1 -c 2
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=2.49 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.548 ms

--- 10.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.548/1.521/2.494/0.973 ms
mininet>

root@mininet-vm:~# ovs-ofctl dump-flows s1
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=485.308s, table=0, n_packets=0, n_bytes=0, idle_age=485, ip,in_port=2,nw_dst=10.0.0.1 actions=drop
 cookie=0x0, duration=483.569s, table=0, n_packets=3, n_bytes=294, idle_age=183, ip,in_port=2,nw_dst=10.0.0.3 actions=drop
 cookie=0x0, duration=1715.894s, table=0, n_packets=93, n_bytes=3906, idle_age=202, arp,in_port=1,arp_tpa=10.0.0.2 actions=drop
 cookie=0x0, duration=715.43s, table=0, n_packets=4, n_bytes=392, idle_age=400, ip,in_port=1,nw_dst=10.0.0.2 actions=drop

3. Full experiment configuration

First, the experiment topology:

(topology diagram not preserved)

SDN switch management address: 192.168.0.250/24

OpenStack Controller/Neutron IP: 192.168.0.80

OpenDayLight IP: 192.168.0.90 Web: http://192.168.0.90:8181/index.html

3.1. Setting up OpenDaylight

On host 192.168.0.90, download the Boron release of OpenDaylight (other versions untested), then unpack and run it (a Java environment is required):

unzip distribution-karaf-0.5.3-Boron-SR3.zip
cd distribution-karaf-0.5.3-Boron-SR3

Start the ODL service

./bin/start

Install and start Open vSwitch locally for testing

yum localinstall -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
systemctl enable openvswitch
systemctl start openvswitch
ovs-vsctl set-manager tcp:192.168.0.90:6640

Enter the client console

./bin/client

Install netvirt and the DLUX UI (yangman is the package-manager UI)

opendaylight-user@root> feature:install odl-netvirt-openstack odl-dlux-all odl-dlux-yangman odl-mdsal-apidocs odl-netvirt-ui

Then visit http://192.168.0.90:8181/index.html (user/password admin/admin); you should see the local OVS we just added.

3.3. Integrating OpenStack with OpenDaylight

First, remove the existing instances and Neutron networks.

Instances

openstack server list
+--------------------------------------+-----------+---------+------------+-------------+-------------------------------------------------------+
| ID                                   | Name      | Status  | Task State | Power State | Networks                                              |
+--------------------------------------+-----------+---------+------------+-------------+-------------------------------------------------------+
| d45b1646-b559-4d6d-963c-af3da205aa36 | instance3 | SHUTOFF | -          | Shutdown    | flat-ens39=192.168.0.108; sharednet1=192.168.118.203  |
+--------------------------------------+-----------+---------+------------+-------------+-------------------------------------------------------+

openstack server delete instance3

Networks

openstack subnet list
+--------------------------------------+-------------+--------------------------------------+------------------+
| ID                                   | Name        | Network                              | Subnet           |
+--------------------------------------+-------------+--------------------------------------+------------------+
| b0acb3ad-22d4-491c-94e7-488aca906398 | flat-subnet | b8f95552-9a45-49ae-b080-2e0041d4b2a0 | 192.168.0.0/24   |
| c066992d-294d-4cd6-a3fb-0386619922c7 | subnet1     | 5ac25e6f-bf49-4217-abbc-11bf171c0a3e | 192.168.118.0/24 |
+--------------------------------------+-------------+--------------------------------------+------------------+

openstack router list
+--------------------------------------+-------------------+--------+-------+-------------+-------+----------------------------------+
| ID                                   | Name              | Status | State | Distributed | HA    | Project                          |
+--------------------------------------+-------------------+--------+-------+-------------+-------+----------------------------------+
| 78507b26-38f0-4b4a-a3c6-13d15c4f9665 | router-flat-ens39 | ACTIVE | UP    | False       | False | de2aea51161642759a687b5768b23b7e |
+--------------------------------------+-------------------+--------+-------+-------------+-------+----------------------------------+
........

After deleting all ports, routers and subnets, check again with the following command; it should return nothing.

openstack port list

Next we hand the OVS instances on the control node (the Neutron management node) and the compute nodes over to ODL. First stop the relevant services.

Compute node (if it is the same machine as the control node, run this part first)

systemctl stop neutron-openvswitch-agent
systemctl disable neutron-openvswitch-agent
systemctl stop neutron-l3-agent
systemctl disable neutron-l3-agent

Neutron control node

systemctl stop neutron-server
systemctl stop neutron-l3-agent

Then wipe the OVS database.

systemctl stop openvswitch
rm -rf /var/log/openvswitch/*
rm -rf /etc/openvswitch/conf.db
systemctl start openvswitch

After wiping it you should see the following

ovs-vsctl show
bdf45776-c7c2-4df4-9911-007e89b67bbe
    ovs_version: "2.6.1"

Hand it over to ODL.

ovs-vsctl set-manager tcp:192.168.0.90:6640

Then, on the control and compute nodes, set the local endpoint used for vxlan traffic; here there is only one node, 192.168.0.80.

ovs-vsctl set Open_vSwitch . other_config:local_ip=192.168.0.80

Once this is set you can see that ODL has automatically created a br-int on the node, connected to the ODL controller.

ovs-vsctl show
bdf45776-c7c2-4df4-9911-007e89b67bbe
    Manager "tcp:192.168.0.90:6640"
        is_connected: true
    Bridge br-int
        Controller "tcp:192.168.0.90:6653"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.6.1"

ovs-vsctl get Open_vSwitch . other_config

Now reload the ODL UI and you will see one more OVS.

Next, to make Neutron use ODL, we first need to install the corresponding package on the Neutron control node.

If you are running ODL on the OpenStack control node, refer to the following on moving the swift port:

First, ensure that port 8080 (which will be used by OpenDaylight to listen for REST calls) is available. By default, swift-proxy-service listens on the same port, and you may need to move it (to another port or another host), or disable that service. It can be moved to a different port (e.g. 8081) by editing /etc/swift/proxy-server.conf and /etc/cinder/cinder.conf, modifying iptables appropriately, and restarting swift-proxy-service. Alternatively, OpenDaylight can be configured to listen on a different port, by modifying the jetty.port property value in etc/jetty.conf.

yum install -y python-networking-odl

Edit /etc/neutron/plugins/ml2/ml2_conf.ini and add the following.

crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers opendaylight
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan

The settings in ml2_conf_odl.ini had no effect and produced errors. Probably my fault: I forgot to modify the neutron-server startup command.

cat << EOF >> /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_odl]
url = http://192.168.0.90:8080/controller/nb/v2/neutron
password = admin
username = admin
EOF

Then change the service backends to ODL.

crudini --set /etc/neutron/neutron.conf DEFAULT service_plugins odl-router
crudini --set /etc/neutron/dhcp_agent.ini DEFAULT force_metadata True
crudini --set /etc/neutron/dhcp_agent.ini ovs ovsdb_interface vsctl

Reset the Neutron database and restart the service.

mysql -e "DROP DATABASE IF EXISTS neutron_ml2;" -uroot -p mysql -e "CREATE DATABASE neutron_ml2 CHARACTER SET utf8;" -uroot -p neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head systemctl start neutron-server

Verify that it works.

curl -u admin:admin http://192.168.0.90:8080/controller/nb/v2/neutron/networks
{
   "networks" : [ ]
}

Finally, create networks and instances in OpenStack.

neutron router-create router1
neutron net-create private
neutron subnet-create private --name=private_subnet 10.10.5.0/24
neutron router-interface-add router1 private_subnet
nova boot --flavor <flavor> --image <image> --nic net-id=<net-id> test1
nova boot --flavor <flavor> --image <image> --nic net-id=<net-id> test2

Add floating IPs.

ovs-vsctl set Open_vSwitch . other_config:provider_mappings=physnet1:eth1
neutron net-create public-net -- --router:external --is-default --provider:network_type=flat --provider:physical_network=physnet1
neutron subnet-create --allocation-pool start=10.10.10.2,end=10.10.10.254 --gateway 10.10.10.1 --name public-subnet public-net 10.10.0.0/16 -- --enable_dhcp=False
neutron router-gateway-set router1 public-net

neutron floatingip-create public-net
nova floating-ip-associate test1 <floating-ip>

When creating a network on OpenStack Ocata, ODL and OpenStack reported the following errors.

ODL:
2017-06-03 11:27:10,768 | ERROR | pool-46-thread-1 | QosInterfaceStateChangeListener | 350 - org.opendaylight.netvirt.neutronvpn-impl - 0.3.3.Boron-SR3 | Qos:Exception caught in Interface Operational State Up event
java.lang.IllegalArgumentException: Supplied value "tapd5e5b153-39" does not match required pattern "^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$"
        at com.google.common.base.Preconditions.checkArgument(Preconditions.java:145)[65:com.google.guava:18.0.0]
        at org.opendaylight.yang.gen.v1.urn.ietf.params.xml.ns.yang.ietf.yang.types.rev130715.Uuid.<init>(Uuid.java:55)[80:org.opendaylight.mdsal.model.ietf-yang-types-20130715:2013.7.15.9_3-Boron-SR3]
        at org.opendaylight.netvirt.neutronvpn.QosInterfaceStateChangeListener.add(QosInterfaceStateChangeListener.java:61)[350:org.opendaylight.netvirt.neutronvpn-impl:0.3.3.Boron-SR3]
        at org.opendaylight.netvirt.neutronvpn.QosInterfaceStateChangeListener.add(QosInterfaceStateChangeListener.java:27)[350:org.opendaylight.netvirt.neutronvpn-impl:0.3.3.Boron-SR3]
        at org.opendaylight.genius.datastoreutils.AsyncDataTreeChangeListenerBase$DataTreeChangeHandler.run(AsyncDataTreeChangeListenerBase.java:136)[310:org.opendaylight.genius.mdsalutil-api:0.1.3.Boron-SR3]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_131]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_131]
        at java.lang.Thread.run(Thread.java:748)[:1.8.0_131]

OS: ERROR neutron.plugins.ml2.managers [req-8174eb6e-6a01-428e-9c3d-d9f115dfa36b - - - - -] Failed to bind port d5e5b153-390a-4129-babd-7work': None, 'id': u'f01e69c5-096e-4d01-a239-5dbe37005e9b', 'network_type': u'vxlan'}]

The cause is probably that parameter passing changed between versions and ODL hasn't caught up. I don't know Java, and even if I did I wouldn't want to patch ODL, so I recommend using an older Neutron release such as Mitaka or Newton.

3.4. Creating a VTEP with OVS

IP: 192.168.0.101
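The steps for this section were never written down. For reference, OVS ships a VTEP emulator that can be driven roughly like this; a sketch only, assuming the ovs-vtep script from the openvswitch package and the IP above as the tunnel endpoint:

ovs-vsctl add-br br-vtep
vtep-ctl add-ps br-vtep
vtep-ctl set Physical_Switch br-vtep tunnel_ips=192.168.0.101
# the emulator translates the VTEP schema into OVS flows
/usr/share/openvswitch/scripts/ovs-vtep --log-file=/var/log/openvswitch/ovs-vtep.log \
    --pidfile=/var/run/openvswitch/ovs-vtep.pid --detach br-vtep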

3.5. Integrating OpenStack with the SDN switch

With ODL done, let's try integrating OpenStack with the SDN switch. This is independent of the previous section, i.e. a brand new OpenStack Newton environment.

For commercial and legal reasons I won't publish the full source of Pica8's OpenStack plugin; publishing it wouldn't be very useful anyway. If you have an SDN switch, ask the vendor for the plugin.

/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/

The Pica8 switch works in OVS mode, so the Neutron server has to connect to its OVS manager (over the management port); there may be other ways.

First copy the plugin files to /usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/; the directory layout is as follows.

/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/pica8/
├── config.pyc
├── db.pyc
├── exceptions.pyc
├── __init__.pyc
├── mechanism_pica8.pyc
├── ml2_conf_pica8.ini
├── rpc.pyc
└── vtep.ovsschema

Edit /usr/lib/python2.7/site-packages/neutron-9.3.1-py2.7.egg-info/entry_points.txt to add the Neutron driver.

...
[neutron.ml2.mechanism_drivers]
...
pica8 = neutron.plugins.ml2.drivers.pica8.mechanism_pica8:Pica8Driver
...

Create /etc/neutron/plugins/ml2/ml2_conf_pica8.ini and add the following.

[ml2_pica8]
# (ListOpt) List of other VTEPs' IP address, either a software vtep or
# other vendor's hardware vtep.
# Example: vtep_list=10.0.0.100,10.0.0.101,10.0.0.102
vtep_list=192.168.0.101

# (IntOpt) Sync interval in seconds between Neutron plugin and PicaOS.
# This field defines how often the synchronization is performed.
# This is an optional field. If not set, a value of 180 seconds
# is assumed.
sync_interval = 60
# Example: sync_interval = 60

openstack_version = newton
# Example: openstack_version = kilo

# PicOS Switch configurations.
# Each switch to be managed by Openstack Neutron must be configured here.
# Format:
# [ml2_mech_pica_switch:192.168.0.250]
# <interface>=<physical_network>:<host>,<host>,...
# <interface>=<physical_network>:project_name,<project_name>,vni,<vni>
# ovsdb_port=<port>
#
# Example:
[ml2_mech_pica_switch:192.168.0.250]
te-1/1/1=physnet1:dev-1,dev-2,dev-3
te-1/1/2=physnet2:dev-4
te-1/1/3=physnet1:project_name,admin,vni,3535
ovsdb_port=6640
source_ip=192.168.0.80

Edit /usr/lib/systemd/system/neutron-server.service and append the Pica8 config file to the ${DAEMON_ARGS} field; you need to stop the neutron-server service and run systemctl daemon-reload.

--config-file=/etc/neutron/plugins/ml2/ml2_conf_pica8.ini

Edit /etc/neutron/plugins/ml2/ml2_conf.ini as follows.

type_drivers = vlan,vxlan
project_network_types = vxlan
mechanism_drivers = pica8,openvswitch

# Make sure the <physical_network> name is consistent in the following configuration
bridge_mappings=<physical_network>:br-ex
network_vlan_ranges = <physical_network>:1:4094

Install the Python ovs library.

yum install -y python-pip
pip install ovs

Run the following to create the Pica8 plugin database migration.

neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini revision -m "add pica8 mechanism driver" --expand

Add the following to the generated file /usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/versions/newton/expand/94ca4fc9191b_add_pica8_mechanism_driver.py.

def upgrade():
    op.create_table(
        'pica8_interfaces_v2',
        sa.Column('id', sa.String(length=36), nullable=False, primary_key=True),
        sa.Column('switch', sa.String(length=36), nullable=False),
        sa.Column('interface', sa.String(10), nullable=False)
    )
    op.create_table(
        'pica8_vlan_allocations_v2',
        sa.Column('id', sa.String(length=36), nullable=False, primary_key=True),
        sa.Column('project_id', sa.String(length=255), nullable=False),
        sa.Column('network_id', sa.String(length=36), nullable=False),
        sa.Column('segmentation_id', sa.Integer, nullable=False),
        sa.Column('vlan_id', sa.Integer, nullable=False),
        sa.Column('vm_reference', sa.Integer, nullable=False, default=0),
        sa.Column('interface_id', sa.String(length=36), nullable=True),
        sa.ForeignKeyConstraint(['interface_id'], ['pica8_interfaces_v2.id'], ondelete='CASCADE')
    )

def downgrade():
    op.drop_table('pica8_interfaces')
    op.drop_table('pica8_vlan_allocations_v2')

Run the following to upgrade the database (if it fails, replace the last argument with head).

neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade 94ca4fc9191b

Restart the services.

systemctl restart openstack-nova-api
systemctl restart neutron-server

Configuration for attaching physical hosts.

Edit the file /etc/neutron/plugins/ml2/ml2_conf_pica8.ini to contain the following content:
<interface>=<physical_network>:project_name,<project_name>,vni,<vni>
<interface> is the interface of the Pica8 switch connected to the physical host
<project_name> is the name of the project owning the physical host
<vni> is the vxlan id of the project network the physical host connects to

References: MiniEdit usage; Pica8 manual; ODL manual; OpenDaylight with Openstack Guide; OpenStack with NetVirt; Installing OpenStack and OpenDaylight using DevStack; https://wiki.opendaylight.org//blog/images/5/59/CloudIntegrationwithOpenStackOVSDBNetVirt.pdf

Design of a nonlinear PID control system for cloud-platform workloads

The practical efficiency of this design is yet to be verified and it may well turn out to be nonsense, but on the network-resource side it can be applied and tested quickly.

As we all know, many designs in computer systems can be described with linear models. But just as with the financial system, once computer systems face the general public they show nonlinear behavior; a typical DDoS, for example, falls outside the system's design. Below I'll use some control theory to design a load-adaptive controller for cloud computing, applicable not only to compute but also to network, storage and other service resources.

Take compute (VMs) on an OpenStack platform as an example. Once a user's compute demand is quantified, we can hand out a corresponding number of compute nodes. Suppose the user's demand varies, with value R, and our rather literal-minded programmer provides compute capacity C; the program he writes is very likely just this formula:

N=10, R=nN, C=(n+2)N

Each VM has compute capacity N = 10, and n is the number of VMs.

You can see he allowed two VMs of headroom; not bad. So the scenario he expects looks like this.

Yellow is the actual demand, blue is what the platform provides, and green is the number of VMs.

But suppose the demand swings wildly over short periods, as in the figure below.

Then things are not so pretty: while responding immediately, the platform brings large numbers of VMs online and offline, which causes some congestion of resource requests and degrades control performance.

Next we try introducing a PID feedback controller, which looks like this.

A Python implementation of PID follows:

#!/usr/bin/python

import time

class PID:
    def __init__(self, P=0.2, I=0.0, D=0.0):

        self.Kp = P
        self.Ki = I
        self.Kd = D

        self.sample_time = 0.00
        self.current_time = time.time()
        self.last_time = self.current_time

        self.clear()

    def clear(self):
        #Clears PID computations and coefficients
        self.SetPoint = 0.0

        self.PTerm = 0.0
        self.ITerm = 0.0
        self.DTerm = 0.0
        self.last_error = 0.0

        # Windup Guard
        self.int_error = 0.0
        self.windup_guard = 20.0

        self.output = 0.0

    def update(self, feedback_value):
        # Calculates PID value for given reference feedback

        error = self.SetPoint - feedback_value

        self.current_time = time.time()
        delta_time = self.current_time - self.last_time
        delta_error = error - self.last_error

        if (delta_time >= self.sample_time):
            self.PTerm = self.Kp * error
            self.ITerm += error * delta_time

            if (self.ITerm < -self.windup_guard):
                self.ITerm = -self.windup_guard
            elif (self.ITerm > self.windup_guard):
                self.ITerm = self.windup_guard

            self.DTerm = 0.0
            if delta_time > 0:
                self.DTerm = delta_error / delta_time

            # Remember last time and last error for next calculation
            self.last_time = self.current_time
            self.last_error = error

            self.output = self.PTerm + (self.Ki * self.ITerm) + (self.Kd * self.DTerm)

    def setKp(self, proportional_gain):
        # Determines how aggressively the PID reacts to the current error with setting Proportional Gain
        self.Kp = proportional_gain

    def setKi(self, integral_gain):
        # Determines how aggressively the PID reacts to the current error with setting Integral Gain
        self.Ki = integral_gain

    def setKd(self, derivative_gain):
        # Determines how aggressively the PID reacts to the current error with setting Derivative Gain
        self.Kd = derivative_gain

    def setWindup(self, windup):
        # unwound
        self.windup_guard = windup

    def setSampleTime(self, sample_time):
        # PID that should be updated at a regular interval.
        self.sample_time = sample_time

Then run PID control under the conditions of the figure above; the code is as follows:

import PID
import time
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import spline

def test_pid(P = 0.2, I = 0.0, D= 0.0, L=100):
    """Self-test PID class

.. note::
    ...
    for i in range(1, END):
        pid.update(feedback)
        output = pid.output
        if pid.SetPoint > 0:
            feedback += (output - (1/i))
        if i>9:
            pid.SetPoint = 1
        time.sleep(0.02)
    ---
"""
    pid = PID.PID(P, I, D)
    pid.clear()
    pid.SetPoint=0.0
    pid.setSampleTime(0.01)

    END = L
    feedback = 0

    feedback_list = []
    time_list = []
    setpoint_list = []

    for i in range(1, END):
        pid.update(feedback)
        output = pid.output
        if pid.SetPoint > 0:
            feedback += (output - (1/i))
        if i<10:
            pid.SetPoint = 20
        if i>20:
            pid.SetPoint = 50
        if i>22:
            pid.SetPoint = 100
        if i>24:
            pid.SetPoint = 10
        if i>28:
            pid.SetPoint = 200
        if i>30:
            pid.SetPoint = 200
        if i>70:
            pid.SetPoint = 20

        feedback_list.append(feedback)
        setpoint_list.append(pid.SetPoint)
        time_list.append(i)
        time.sleep(0.02)

    time_sm = np.array(time_list)
    time_smooth = np.linspace(time_sm.min(), time_sm.max(), 300)
    feedback_smooth = spline(time_list, feedback_list, time_smooth)

    plt.plot(time_smooth, feedback_smooth)
    plt.plot(time_list, setpoint_list)
    plt.xlim((0, L))
    plt.ylim((min(feedback_list)-0.5, max(feedback_list)+0.5))
    plt.xlabel('time (s)')
    plt.ylabel('PID C-R')

    plt.grid(True)
    plt.show()

if __name__ == "__main__":
    test_pid(1.01, 1, 0.001, L=100)

So what does it look like now?

Right: there is some extra adjustment (overshoot), and the changes are smoother than before. Whether this amount of adjustment suits a large-scale cloud environment remains to be verified, but for Web workloads it should work.

Also, considering that some applications may not tolerate scaling down shortly after a VM has been created, we can adjust the value of C dynamically, i.e. treat the PID output only as a reference.

Deploy Asterisk on CentOS

Get the latest packages (current as of 2014-02-10).

rpm -Uvh http://packages.asterisk.org/centos/6/current/x86_64/RPMS/asterisknow-version-3.0.1-2_centos6.noarch.rpm

yum install asterisk asterisk-configs --enablerepo=asterisk-12

yum install dahdi-linux dahdi-tools libpri

chkconfig dahdi on

chkconfig asterisk on

service dahdi start

service asterisk start

You can use freepbx on http://localhost .

yum install freepbx


title: "use MAXS to control your device via ejabberd(plus ssh, jingle voice talk as a bonus)" date: 2014-02-26 categories: - "cloud-infra"


Let's see what we have got here: an XMPP server based on ejabberd on my host lofyer.org; Windows clients: Jitsi (recommended), Pidgin; (optional) an Android client: Xabber; and MAXS on my Nexus 5 Android phone.

1. Prepare the server(Debian 7)

apt-get install ejabberd

cd /etc/ejabberd/; wget http://people.collabora.com/~robot101/olpc-ejabberd/ejabberd.cfg

Change the hosts and admin sections to your FQDN. Here's an example:

{hosts, ["lofyer.org"]}. {acl, admin, {user, "mypassword", "lofyer.org"}}.

Then restart ejabberd; a full reboot may even be necessary.

/etc/init.d/ejabberd restart

Enable Jingle(voice and video)

You need JingleNodes module on your server.

apt-get install erlang-tools

git clone git://git.process-one.net/exmpp/mainline.git exmpp

cd exmpp; ./configure; make; make install

svn checkout http://jinglenodes.googlecode.com/svn/ jinglenodes

cd jinglenodes; ./configure --prefix=/usr/; make; make install

Add following content to your ejabberd.cfg in the modules section.

{mod_jinglenodes, [
    {host, "jinglenodes.@HOST@"},
    {public_ip, "192.168.1.148"},
    {purge_period, 5000},
    {relay_timeout, 60000}
]},

Enable web register(optional)

Add to ejabberd.cfg, 'modules' section the basic configuration:

{modules, [
    ...
    {mod_register_web, []},
    ...
]}.

In the 'listen' section enable the web page:

{listen, [
    ...
    {5281, ejabberd_http, [
        tls,
        {certfile, "/etc/ejabberd/ejabberd.pem"},
        {request_handlers, [
            {["register"], mod_register_web}
        ]}
    ]},
    ...
]}.

Use your own certificate

openssl req -new -x509 -newkey rsa:1024 -days 3650 -keyout privkey.pem -out server.pem
openssl rsa -in privkey.pem -out privkey.pem
cat privkey.pem >> server.pem
rm privkey.pem

The port numbers you should open are: 5281(http://localhost:5281/register/) 5280(http://localhost:5280/admin) and 5222(for c2s).
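If a firewall is running on the server, those three ports need to be opened; for a plain iptables setup that could look like this (adapt to your own firewall tooling):

iptables -I INPUT -p tcp --dport 5222 -j ACCEPT   # XMPP c2s
iptables -I INPUT -p tcp --dport 5280 -j ACCEPT   # admin UI
iptables -I INPUT -p tcp --dport 5281 -j ACCEPT   # web registration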

Register users:

ejabberdctl register admin lofyer.org mypassword

ejabberdctl register myphone lofyer.org mypassword

ejabberdctl register mypc lofyer.org mypassword

2. Pidgin and MAXS test

Pidgin: [email protected]
MAXS: [email protected]
By the way, make sure there is only one running jabber client on your phone during this period.
Pidgin, add a friend (pidgin)
Shell test (shell)
SMS test (SMS-SEND)
And a msg to my GF (sms-receive)

title: "foreman/puppet/cfengine/bcfg2/chef howto" date: 2014-02-21 categories: - "linux-admin"


Which one is the best automatic management tool? TBD

title: "Gitlab quick deploy" date: 2014-06-08 categories: - "linux-admin"


Well, Gitweb + ssh://git@host is out of date, even though we have used it for almost 2 years.

We are migrating our repositories to GitLab, whose "issue" feature we benefit from a lot.

Please follow this script I wrote.

https://raw.githubusercontent.com/lofyer/onekey-deploy/master/gitlab/install.sh

title: "Grafana+InfluxDB+Collectd/Telegraf on RPi2" date: 2017-03-06 categories: - "linux-admin"


Grafana will provide a visual view for the sites, InfluxDB is the data box, and collectd/telegraf is the agent on the server. Here we go.

Install Grafana:

Download deb from https://github.com/fg2it/grafana-on-raspberry

root@raspberrypi:~# dpkg -i grafana.deb
root@raspberrypi:~# service grafana-server start

Install InfluxDB: Download from https://portal.influxdata.com/downloads

root@raspberrypi:~# tar xf influxdb-1.2.0_linux_armhf.tar.gz
root@raspberrypi:~# cp -a influxdb-1.2.0-1/* /

vim /etc/influxdb/influxdb.conf:

[admin]
enabled=true

[http]
enabled=true

[collectd]
enabled=true
bind-address=":25826"
database="collectd"

Then run "influxdb &" and check it out in http://localhost:8083, add db named "collectd".

Install Collectd:

root@raspberrypi:~# apt-get install collectd

In /etc/collectd/collectd.conf, find :
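The snippet that belongs here did not survive. The idea is to load collectd's network plugin and point it at the InfluxDB collectd listener configured above; a sketch, assuming InfluxDB runs on the same RPi:

cat >> /etc/collectd/collectd.conf << 'EOF'
LoadPlugin network
<Plugin network>
  Server "127.0.0.1" "25826"
</Plugin>
EOF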

Then restart collectd service.

Now you can visit http://localhost:3000 to add InfluxDB source and add panel.

title: "Hercules with Jason UI, emulator of IBM mainframe" date: 2017-04-26 categories: - "linux-admin"


Hercules is an open source software implementation of the mainframe System/370 and ESA/390 architectures, in addition to the new 64-bit z/Architecture. Hercules runs under Linux, Windows (98, NT, 2000, and XP), Solaris, FreeBSD, and Mac OS X (10.3 and later).

Online web interface.(deprecated)

Jason 1.00 is an integrated graphical frontend to the Hercules S/370, ESA/390 and z/Architecture Emulator. What, you haven't heard of Hercules before? It's a masterpiece that emulates IBM mainframes, from old good IBM System/360 and up to the modern z Series... No, it has nothing to do with IBM compatible... No, it can't emulate Xbox 360... Oh, you are asking what a mainframe is? Then probably you don't need Jason.

Download Hercules with Jason.

title: "Heartbeat and drbd test high availability" date: 2014-01-16 categories: - "linux-admin"


Hosts:
192.168.1.101 ha1.lofyer.org, 2 hard drive disks, two ethernet ports
192.168.1.103 ha2.lofyer.org, almost the same as ha1

Server host, this is the IP of heartbeat service: 192.168.1.100

Install

The repos you need in centos

[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[elrepo]
name=ELRepo.org Community Enterprise Linux Repository - el6
baseurl=http://elrepo.org/linux/elrepo/el6/$basearch/
mirrorlist=http://elrepo.org/mirrors-elrepo.el6
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-elrepo.org
protect=0

yum install drbd84 kmod-drbd84 heartbeat mysql-server

Setup

1. Drbd configuration both hosts

Add following content to file: /etc/hosts

192.168.1.101 ha1.lofyer.org
192.168.1.103 ha2.lofyer.org

Disable selinux and iptables

sed -i 's/enforcing/permissive/' /etc/selinux/config

setenforce 0

chkconfig iptables off

service iptables stop

Prepare the disk partion

fdisk /dev/sdb << EOF
n
p
1
w
EOF

Configuration for mysql
# mkdir /db
# sed -i 's/datadir=\/var\/lib\/mysql/datadir=\/db/' /etc/my.cnf

Configuration for drbd, file: /etc/drbd.conf

global {
    minor-count 64;
    usage-count yes;
}
common {
    syncer { rate 1000M; }
}
resource ha {
    protocol C;
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-local-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        fence-peer /usr/lib/heartbeat/drbd-peer-outdater -t 5;
        pri-lost "/usr/lib/drbd/notify-pri-lost.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
    }
    startup {
        wfc-timeout 60;
        degr-wfc-timeout 120;
        outdated-wfc-timeout 2;
    }
    disk {
        on-io-error detach;
        fencing resource-only;
    }
    syncer {
        rate 1000M;
    }
    on ha1.lofyer.org {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.1.101:7788;
        meta-disk internal;
    }
    on ha2.lofyer.org {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.1.103:7788;
        meta-disk internal;
    }
}

Chmod for drbd

# chgrp haclient /sbin/drbdsetup
# chmod o-x /sbin/drbdsetup
# chmod u+s /sbin/drbdsetup
# chgrp haclient /sbin/drbdmeta
# chmod o-x /sbin/drbdmeta
# chmod u+s /sbin/drbdmeta

Resource for drbd

# modprobe drbd
# dd if=/dev/zero of=/dev/sdb1 bs=1M count=100
# drbdadm create-md ha
# service drbd start
# chkconfig drbd on

Watch drbd status

watch -n 1 service drbd status

You can see that both hosts are Secondary/Secondary.

2. Drbd configuration on one of hosts, like ha1

Make ha1 Primary

# drbdadm -- --overwrite-data-of-peer primary ha
# service drbd status

Then you should see Primary; wait until both hosts are UpToDate. Initialization for MySQL: make a filesystem and mount it.

mkfs.ext4 /dev/drbd0

mount /dev/drbd0 /db

service mysqld start

Now you should see what you have got in /db, then umount /db, stop mysql-server and make ha1 Secondary.

service mysqld stop

umount /dev/drbd0

drbdadm secondary ha

3. Heartbeat configuration on both hosts

cluster authkey

(echo -ne "auth 1\n1 sha1 "; dd if=/dev/urandom bs=512 count=1 | openssl md5) > /etc/ha.d/authkeys

cat /etc/ha.d/authkeys

auth 1
1 sha1 71461fc5e160d7846c2f4b524f952128

chmod 600 /etc/ha.d/authkeys

scp /etc/ha.d/authkeys node2:/etc/ha.d/

YOU SHOULD MODIFY THE IP IN THE FILE. file: /etc/ha.d/ha.cf

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
autojoin none
ucast eth0 192.168.1.101
ucast eth1 192.168.1.102
ping 192.168.1.100
respawn hacluster /usr/lib64/heartbeat/ipfail
respawn hacluster /usr/lib64/heartbeat/dopd
apiauth dopd gid=haclient uid=hacluster
udpport 694
warntime 5
deadtime 15
initdead 60
keepalive 2
node ha1.lofyer.org
node ha2.lofyer.org
auto_failback off

The service will be serve on IP 192.168.1.100. file: /etc/ha.d/haresources

mysql.lofyer.org 192.168.1.100 drbddisk::ha Filesystem::/dev/drbd0::/db::ext4 mysql

If you just wanna a virtual ip, use this

hosta.vf.com IPaddr::192.168.0.100/24/eth0:0

Add mysql entry to heartbeat file: /etc/ha.d/resource.d/mysql

#!/bin/bash

. /etc/ha.d/shellfuncs
case "$1" in
start)
    res=`/etc/init.d/mysqld start`
    ret=$?
    ha_log $res
    exit $ret
    ;;
stop)
    res=`/etc/init.d/mysqld stop`
    ret=$?
    ha_log $res
    exit $ret
    ;;
status)
    if [[ `ps -ef | grep '[m]ysqld'` > 1 ]]; then
        echo "running"
    else
        echo "stopped"
    fi
    ;;
*)
    echo "Usage: mysqld {start|stop|status}"
    exit 1
    ;;
esac
exit 0

Add excute permission to it.

chmod 755 /etc/ha.d/resource.d/mysql

Add heartbeat service to system

chkconfig --add heartbeat

chkconfig heartbeat on

service heartbeat start

You may need to modify the start order of the drbd and heartbeat services. In /etc/init.d/, the numbers 85 and 15 represent the order in which the script is run at startup and shutdown time. # chkconfig: - 85 15

Test HA
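The test itself was left out; a simple way to exercise the failover with the setup above is to stop heartbeat on the active node and watch the virtual IP and mysql move:

# on ha1 (currently active)
service heartbeat stop
# on ha2: the VIP, the /db mount and mysqld should come up within deadtime (15s)
ip addr | grep 192.168.1.100
service drbd status
service mysqld status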


title: "ltsp相关" date: 2013-03-15 categories: - "linux-admin"


Reference: https://help.ubuntu.com/community/UbuntuLTSP

Install: apt-get install ltsp-server-standalone

Required components: nbd-server dhcpd tftp-hpa openssh-server

Bind the client address: [dhcpd.conf]
host client201 {
    hardware ethernet 08:00:27:89:70:01;
    fixed-address 192.168.0.201;
}
The other way is to specify it in the PXE boot configuration file: http://wiki.phys.ethz.ch/readme/setting_up_an_ltsp_server_for_diskless_clients

Session & windows: [/usr/share/xsession/*]
Exec=/root/.xsession

[/root/.xsession]
#!/bin/bash
gnome-session &
firefox
logout

Get the session list: [/usr/share/ldm/ldminfod] [/etc/X11/xinit/Xsession] [/etc/X11/Xsession] **failsafe [/etc/X11/xdm/Xsession] [/usr/lib/X11/xdm/Xsession] *[/usr/share/xsession]

**default session exported by Xsession.d
echo $DEFAULTS_PATH
/usr/share/gconf/

Change the default session: [/var/lib/tftp.../lts.conf]
LDM_SESSION="gnome-session &;firefox;logout"

FatClient: [/var/lib/tftp.../lts.conf]
[default]
LDM_DIRECTX=true

[00:A1:08:EB:43:27] LTSP_FATCLIENT=false

AutoLogin [/var/lib/tftp.../lts.conf] [Default] LDM_AUTOLOGIN = True

[192.168.1.101] LDM_USERNAME = user1 LDM_PASSWORD = password1

[192.168.1.102] LDM_USERNAME = user2 LDM_PASSWORD = password2

Some reference configs: [lts.conf]
# Global defaults for all clients
# if you refer to the local server, just use the
# "server" keyword as value
# see lts_parameters.txt for valid values
################
[default]
X_COLOR_DEPTH=24
LOCALDEV=True
SOUND=True
USE_LOCAL_SWAP=True
NBD_SWAP=False
SYSLOG_HOST=server
#XKBLAYOUT=de
SCREEN_02=shell
SCREEN_03=shell
SCREEN_04=shell
SCREEN_05=shell
SCREEN_06=shell
SCREEN_07=ldm
# LDM_DIRECTX=True allows greater scalability and performance
# Turn this off if you want greater security instead.
LDM_DIRECTX=True
# LDM_SYSLOG=True writes to server's syslog
LDM_SYSLOG=True

title: "IPA服务器搭建" date: 2012-11-27 categories: - "linux-admin"


IPASERVER+DNS(DDWRT)+IPACLIENT

SERVER: when running ipa-server-install you can skip the built-in DNS here.
Make sure the host and domain have records in DNS.
Add a user: ipa user-add
Set a password: ipa passwd demo

DNS: below is the dnsmasq configuration on DD-WRT:
domain=ovirt.engine
local=/ovirt.engine/
expand-hosts
address=/ovirtmgmt.ovirt.engine/192.168.1.106
ptr-record=106.1.168.192.in-addr.arpa,"ovirtmgmt.ovirt.engine"
address=/ipa.ovirt.engine/192.168.1.108
ptr-record=108.1.168.192.in-addr.arpa,"ipa.ovirt.engine"
srv-host=_kerberos-master._tcp,ipa.ovirt.engine,88,0,100
srv-host=_kerberos-master._udp,ipa.ovirt.engine,88,0,100
srv-host=_kerberos._tcp,ipa.ovirt.engine,88,0,100
srv-host=_kerberos._udp,ipa.ovirt.engine,88,0,100
srv-host=_kpasswd._tcp,ipa.ovirt.engine,464,0,100
srv-host=_kpasswd._udp,ipa.ovirt.engine,464,0,100
srv-host=_ldap._tcp,ipa.ovirt.engine,389,0,100

IPACLIENT: make sure the domain name and hostname are correct during installation

OVIRT: on first use, run kinit admin on the SERVER

If the reinstall fails:
# ipa-server-install --uninstall -U
# ls -ld /var/lib/pki-ca
If it exists, run:
# pkiremove -pki_instance_root=/var/lib -pki_instance_name=pki-ca --force
# yum reinstall pki-selinux

title: "Intergrate owncloud with AD(LDAP)" date: 2014-04-24 categories: - "linux-admin"


Windows 2008R2 server with the AD role installed.
User group: owncloudgrp
Users in owncloudgrp: aaa, beta
Users must have logon name, first name, and last name set.

Configure the owncloud:

Server:

oc1

User Filter:

oc2

Login Filter:

oc3

Group Filter: every time you change these two sections, wait a few seconds until more than zero users are discovered.

oc4

Advanced - Connection Settings:

oc5

Advanced - Directory Settings:

oc6

Expert: Add internal username: sAMAccountName

oc7

title: "use Foreman/Nagios/Icinga to make life easy..." date: 2013-09-24 categories: - "linux-admin"


Install nagios in Gentoo/CentOS

Gentoo

emerge nagios

Option: recompile apache for php support

add use flag "apache2" to /etc/portage/make.conf

emerge --ask --changed-use --deep @world

Copy following content to /etc/apache2/vhosts.d/

ScriptAlias /nagios/cgi-bin "/usr/lib64/nagios/cgi-bin"

SSLRequireSSL

Options ExecCGI
AllowOverride None
Order allow,deny
Allow from all

Order deny,allow

Deny from all

Allow from 127.0.0.1

AuthName "Nagios Access"
AuthType Basic
AuthUserFile /etc/nagios/auth.users
Require valid-user

Alias /nagios "/usr/share/nagios/htdocs"

SSLRequireSSL

Options None
AllowOverride None
Order allow,deny
Allow from all

Order deny,allow

Deny from all

Allow from 127.0.0.1

AuthName "Nagios Access"
AuthType Basic
AuthUserFile /etc/nagios/auth.users
Require nagiosadmin

Create password for nagiosadmin

htpasswd2 -c /etc/nagios/auth.users nagiosadmin

Add NAGIOS to apache config

/etc/conf.d/apache

APACHE2_OPTS="... -D NAGIOS -D PHP5"

Add user nagios to apache group

usermod -a -G nagios apache

Start service

rc-service nagios restart

rc-service apache2 restart

CentOS

yum install "nagios*"

htpasswd -c /etc/nagios/passwd admin

chkconfig nagios on

chkconfig httpd on

service nagios start

service httpd start

Add routers/hosts, add service, add hooks
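For the hosts/services part, plain Nagios object definitions are enough. A minimal sketch (the file path, host name and address are placeholders):

cat > /etc/nagios/conf.d/router1.cfg << 'EOF'
define host {
    use         linux-server
    host_name   router1
    address     192.168.1.1
}
define service {
    use                  generic-service
    host_name            router1
    service_description  PING
    check_command        check_ping!100.0,20%!500.0,60%
}
EOF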

Integrate with oVirt

using Foreman

Install

USE

Integrate with oVirt

TBD

title: "OAuth2 Guide" date: 2017-07-28 categories: - "linux-admin" - "draft"


This is a short introduction to OAuth2; let's get started.

Overall you can follow the Hydra OAuth2 setup guide, which is very detailed, so I won't repeat the steps; the diagram is more intuitive anyway.

Suggested flow: after setting up the server on the host, create the client, the user and the callback_url; you can even bind OpenID. Nice.

title: "OpenLDAP step by step how-to" date: 2014-04-14 categories: - "linux-admin"


I need an authentication system with good compatibility and many extended features (like biometric devices). So I've got AD, IPA and OpenLDAP to choose from. AD comes from MS and is too "heavy" for a not-very-large system. IPA and OpenLDAP are almost the same, but I prefer the latter since it's compatible with oVirt (this is why I chose CentOS rather than Debian).

The simplest OpenLDAP server

A basic LDAP without any security or additional features.

OpenLDAP with SASL

Add SASL to our LDAP.

OpenLDAP with SAMBA

To add Windows PC to our domain.

OpenLDAP with Kerberos

This is what we want finally. ============================================================

1. The simplest OpenLDAP server

I've got 2 ways to setup an openldap server: 389-ds script and manually configure.

1.1 Using 389-ds script

Here's the original article.

Preparation

Before setup, a few system files should be modified. Add the following to /etc/hosts:

192.168.1.80 ldap.lofyer.org

Add the following to /etc/sysctl.conf:

net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000
fs.file-max = 64000

Add the following to /etc/security/limits.conf:

* soft nofile 8192
* hard nofile 8192

Add the following to /etc/pam.d/login:

session required /lib/security/pam_limits.so

Then reboot the machine to make above configurations work.

Setup 389-ds

useradd ldapadmin

passwd ldapadmin

yum install -y 389-ds openldap-clients

setup-ds-admin.pl

Then you'll see some questions like this(sorry for the high-lighting...):

============================================================================== This program will set up the 389 Directory and Administration Servers.

It is recommended that you have "root" privilege to set up the software. Tips for using this program: - Press "Enter" to choose the default and go to the next screen - Type "Control-B" then "Enter" to go back to the previous screen - Type "Control-C" to cancel the setup program

Would you like to continue with set up? [yes]: ## Press Enter ##

============================================================================== Your system has been scanned for potential problems, missing patches, etc. The following output is a report of the items found that need to be addressed before running this software in a production environment.

389 Directory Server system tuning analysis version 23-FEBRUARY-2012.

NOTICE : System is i686-unknown-linux2.6.32-431.el6.i686 (1 processor).

WARNING: 622MB of physical memory is available on the system. 1024MB is recommended for best performance on large production system.

WARNING : The warning messages above should be reviewed before proceeding.

Would you like to continue? [no]: yes ## Type Yes and Press Enter ##

============================================================================== Choose a setup type: 1. Express Allows you to quickly set up the servers using the most common options and pre-defined defaults. Useful for quick evaluation of the products. 2. Typical Allows you to specify common defaults and options. 3. Custom Allows you to specify more advanced options. This is recommended for experienced server administrators only. To accept the default shown in brackets, press the Enter key.

Choose a setup type [2]: ## Press Enter ##

============================================================================== Enter the fully qualified domain name of the computer on which you're setting up server software. Using the form . Example: eros.example.com.

To accept the default shown in brackets, press the Enter key.

Warning: This step may take a few minutes if your DNS servers can not be reached or if DNS is not configured correctly. If you would rather not wait, hit Ctrl-C and run this program again with the following command line option to specify the hostname:

General.FullMachineName=your.hostname.domain.name

Computer name [ldap.lofyer.org]: ## Press Enter ##

============================================================================== he servers must run as a specific user in a specific group. It is strongly recommended that this user should have no privileges on the computer (i.e. a non-root user). The setup procedure will give this user/group some permissions in specific paths/files to perform server-specific operations.

If you have not yet created a user and group for the servers, create this user and group using your native operating system utilities.

System User [nobody]: ldapadmin   ## Enter the LDAP user name created above ##
System Group [nobody]: ldapadmin

============================================================================== Server information is stored in the configuration directory server. This information is used by the console and administration server to configure and manage your servers. If you have already set up a configuration directory server, you should register any servers you set up or create with the configuration server. To do so, the following information about the configuration server is required: the fully qualified host name of the form .(e.g. hostname.example.com), the port number (default 389), the suffix, the DN and password of a user having permission to write the configuration information, usually the configuration directory administrator, and if you are using security (TLS/SSL). If you are using TLS/SSL, specify the TLS/SSL (LDAPS) port number (default 636) instead of the regular LDAP port number, and provide the CA certificate (in PEM/ASCII format).

If you do not yet have a configuration directory server, enter 'No' to be prompted to set up one. Do you want to register this software with an existing configuration directory server? [no]: ## Press Enter ##

============================================================================== Please enter the administrator ID for the configuration directory server. This is the ID typically used to log in to the console. You will also be prompted for the password. Configuration directory server administrator ID [admin]: ## Press Enter ## Password: ## create password ## Password (confirm): ## re-type password ##

============================================================================== The information stored in the configuration directory server can be separated into different Administration Domains. If you are managing multiple software releases at the same time, or managing information about multiple domains, you may use the Administration Domain to keep them separate.

If you are not using administrative domains, press Enter to select the default. Otherwise, enter some descriptive, unique name for the administration domain, such as the name of the organization responsible for managing the domain.

Administration Domain [lofyer.org]: ## Press Enter ##

============================================================================== The standard directory server network port number is 389. However, if you are not logged as the superuser, or port 389 is in use, the default value will be a random unused port number greater than 1024. If you want to use port 389, make sure that you are logged in as the superuser, that port 389 is not in use. Directory server network port [389]: ## Press Enter ##

============================================================================== Each instance of a directory server requires a unique identifier. This identifier is used to name the various instance specific files and directories in the file system, as well as for other uses as a server instance identifier.

Directory server identifier [server]: ## Press Enter ##

============================================================================== The suffix is the root of your directory tree. The suffix must be a valid DN. It is recommended that you use the dc=domaincomponent suffix convention. For example, if your domain is example.com, you should use dc=example,dc=com for your suffix. Setup will create this initial suffix for you, but you may have more than one suffix. Use the directory server utilities to create additional suffixes.

Suffix [dc=lofyer, dc=org]: ## Press Enter ##

=============================================================================

Certain directory server operations require an administrative user. This user is referred to as the Directory Manager and typically has a bind Distinguished Name (DN) of cn=Directory Manager. You will also be prompted for the password for this user. The password must be at least 8 characters long, and contain no spaces. Press Control-B or type the word "back", then Enter to back up and start over. Directory Manager DN [cn=Directory Manager]: ## Press Enter ## Password: ## Enter the password ## Password (confirm):

============================================================================== The Administration Server is separate from any of your web or application servers since it listens to a different port and access to it is restricted.

Pick a port number between 1024 and 65535 to run your Administration Server on. You should NOT use a port number which you plan to run a web or application server on, rather, select a number which you will remember and which will not be used for anything else. Administration port [9830]: ## Press Enter ##

============================================================================== The interactive phase is complete. The script will now set up your servers. Enter No or go Back if you want to change something.

Are you ready to set up your servers? [yes]: ## Press Enter ## Creating directory server . . . Your new DS instance 'server' was successfully created. Creating the configuration directory server . . . Beginning Admin Server creation . . . Creating Admin Server files and directories . . . Updating adm.conf . . . Updating admpw . . . Registering admin server with the configuration directory server . . . Updating adm.conf with information from configuration directory server . . . Updating the configuration for the httpd engine . . . Starting admin server . . . output: Starting dirsrv-admin: output: [ OK ] The admin server was successfully started. Admin server was successfully created, configured, and started. Exiting . . . Log file is '/tmp/setupo1AlDy.log'

Then make these two services start on startup.

chkconfig dirsrv on

chkconfig dirsrv-admin on

With 389-ds scripts, you could use 389-console, please refer to the link above.

1.2 Manually configure

Here's the original article.

Install the packages

yum install openldap{,-clients,-servers}

Change the configuration

In /etc/openldap/slapd.d/cn\=config.ldif:
Delete olcAllows: bind_v2 if you want only v3.
Change olcIdleTimeout from 0 to 30 if you want connections that stay idle for more than 30 seconds to be closed.

Before next step, run this command to generate a SHA encrypted password.

slappasswd

New password: Re-enter new password: {SSHA}aW7TYJ3faz13RKsnr3uiCsbgi55RKhW9

Then copy the output to your clipboard.

In /etc/openldap/slapd.d/cn\=config/olcDatabase\=\{2\}bdb.ldif, modify olcSuffix, olcRootDN and olcRootPW like this:

...
olcSuffix: dc=lofyer, dc=org
olcRootPW: {SSHA}aW7TYJ3faz13RKsnr3uiCsbgi55RKhW9
olcRootDN: cn=admin, dc=lofyer, dc=org
...

Start service

service slapd start

chkconfig slapd on

Add rootdn and groups

dn: dc=lofyer,dc=org objectclass: dcObject objectclass: organization o: Lofyer Org dc: lofyer

dn: ou=People,dc=lofyer,dc=org objectClass: organizationalUnit objectClass: top ou: People

dn: ou=Groups,dc=lofyer,dc=org
objectClass: organizationalUnit
objectClass: top
ou: Groups

dn: cn=admin,dc=lofyer,dc=org
objectclass: organizationalRole
cn: admin

Import the ldif:

ldapadd -x -D "cn=admin,dc=lofyer,dc=org" -W -f /etc/openldap/schema/lofyer.org.ldif

ldapsearch -x -b 'dc=lofyer,dc=org' '(objectclass=*)'

Create a user

Add the following content to user.ldif:

dn: uid=demo,ou=People,dc=lofyer,dc=org
objectclass: top
objectclass: person
objectclass: inetOrgPerson
objectclass: organizationalPerson
uid: demo
cn: demo
sn: demo
givenName: demo

Provide a password:

ldapadd -x -W -D "cn=admin,dc=lofyer,dc=org" -f user.ldif

New password:
Re-enter new password:
Enter LDAP Password:

Add or delete a member of a group (myteam)

Add (add.ldif):

dn: cn=myteam,ou=Groups,dc=lofyer,dc=org
changetype: modify
add: member
member: uid=user1,ou=People,dc=lofyer,dc=org

ldapmodify -x -D "cn=admin,dc=lofyer,dc=org" -W -f add.ldif

Delete (delete.ldif):

dn: cn=myteam,ou=Groups,dc=lofyer,dc=org
changetype: modify
delete: member
member: uid=user1,ou=People,dc=lofyer,dc=org

ldapmodify -x -D "cn=admin,dc=lofyer,dc=org" -W -f delete.ldif
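
To check the result you can simply query the group; a quick sketch reusing the DN from the example above:

ldapsearch -x -b 'cn=myteam,ou=Groups,dc=lofyer,dc=org' member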

Use TLS

Here's the original article.

(Not necessary) Generate a CA

Follow this script.

#!/bin/bash

# Change to the directory and clear out the old certs
cd /etc/openldap/certs
rm -rf *

# This echo statement puts the word "mypassword" (without the quotes) into a
# temporary password file to help automate the process. This will be the
# password for your certificate. Change this as appropriate.
echo "mypassword" > /etc/openldap/certs/password
export PATH=/usr/bin/:$PATH
echo falkdjfdajkasdndwndoqndwapqmhfaksj >> noise.txt

# Associate the password with the certificates which will be generated in the current directory
certutil -N -d . -f /etc/openldap/certs/password
certutil -G -d . -z noise.txt -f /etc/openldap/certs/password

# Generate a CA certificate for the 389 server (answers are Y, , Y)
certutil -S -n "CA certificate" -s "cn=CACert" -x -t "CT,," -m 1000 -v 120 -d . -z /etc/openldap/certs/noise.txt -f /etc/openldap/certs/password

# This builds the server cert
certutil -S -n "OpenLDAP Server" -s "cn=ldap.lofyer.org" -c "CA certificate" -t "u,u,u" -m 1001 -v 120 -d . -z /etc/openldap/certs/noise.txt -f /etc/openldap/certs/password

# This exports the cacert in case you need it
pk12util -d . -o cacert.p12 -n "CA certificate"

# This exports the server-cert which you will need on the Windows AD
pk12util -d . -o servercert.p12 -n "OpenLDAP Server"

# This exports the CA cert for ldap clients
certutil -L -d . -n "CA certificate" -a > /etc/openldap/certs/cacert.pem

# Make the files in here readable
chmod 644 *

# Set the system to use LDAPS
sed -i 's/SLAPD_LDAPS=no/SLAPD_LDAPS=yes/g' /etc/sysconfig/ldap

# Add a firewall exception in case the firewall has not been configured properly
iptables -I INPUT -m state --state NEW -p tcp --dport 636 -j ACCEPT
/etc/init.d/iptables save

# Restart slapd to make the changes take effect
/etc/init.d/slapd restart

Note that the private key password is "mypassword". You will end up with three files: cacert.p12, cacert.pem and servercert.p12. And that's all.

2. Add SASL to OpenLDAP

Okay, we'll add SASL to our LDAP connections.

Install the cyrus-sasl packages:

yum install cyrus-sasl-gssapi

yum install cyrus-sasl-ldap
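
To check that SASL is actually offered by the server, you can query the rootDSE for the supported mechanisms; something like:

ldapsearch -x -LLL -b '' -s base supportedSASLMechanisms

GSSAPI should show up in the list once the cyrus-sasl-gssapi plugin is installed and Kerberos is configured.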


title: "owncloud webdav integration" date: 2013-01-03 categories: - "linux-admin"


After installing ownCloud, you can mount it remotely over the WebDAV protocol, for example:

mount -t davfs {http://localhost/remote.php/webdav,http://localhost/files/webdav.php} /mnt
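
If you want the mount to be more convenient, a davfs2-style /etc/fstab entry might look like this (URL and mount point reused from the example above; the davfs2 package and the secrets file are assumptions on my side):

http://localhost/remote.php/webdav /mnt davfs rw,user,noauto 0 0

with the credentials kept in /etc/davfs2/secrets:

http://localhost/remote.php/webdav admin 123456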

On Windows Server 2003 you need to start the WebClient service. On 2008, WebClient is part of the Desktop Experience component: add Desktop Experience from Server Manager -> Features, change the registry value HKEY_LOCAL_MACHINE -> SYSTEM -> CurrentControlSet -> Services -> WebClient -> Parameters -> BasicAuthLevel to 2, and restart the service. You can then map a network drive, tick "connect using different credentials" and enter the username and password, or use the command line:

NET USE * http://localhost/remote.php/webdav 123456 /user:admin

PS: 1. The third-party ownCloud app WebDev breaks user creation once installed. 2. Change the owner of the ownCloud directory to apache.apache (or www-data.www-data) so that .htaccess takes effect.

title: "Using X server in Windows Linux Subsystem" date: 2016-10-09 categories: - "linux-admin"


1. Turn on "Developer Mode" in the Control Panel.

2. Run "bash".

3. Install Xming (an X server for Windows).

4. Launch your app

export DISPLAY=:0

firefox


5. You can also create a shortcut on your desktop for this (see the sketch after the .bashrc snippet below)

and add the following to ~/.bashrc:

alias home='cd /mnt/c/Users/rex/Desktop'
home
export DISPLAY=:0
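
For the desktop shortcut in step 5, a target along these lines should do the trick; the exact path and application are just an example on my part:

C:\Windows\System32\bash.exe -c "DISPLAY=:0 firefox"

i.e. launch bash non-interactively and start the X application with DISPLAY pointed at Xming.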

Tips:

Use "powershell bash" instead of "bash"; you can access your services this way.

title: "Configure corosync and pacemaker" date: 2015-02-28 categories: - "linux-admin"


Env:
node1 eth0 192.168.0.201
node2 eth0 192.168.0.202

1. Install essential packages

Add the following content to /etc/yum.repos.d/ha.repo, since you will need crmsh later:

[haclustering]
name=HA Clustering
baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/
enabled=1
gpgcheck=0

Install packages:

yum install pacemaker corosync crmsh -y

2. Configure corosync

Use the configuration below if you want unicast (udpu) instead of multicast:

service {
    # Load the Pacemaker Cluster Resource Manager
    ver: 0
    name: pacemaker
    use_mgmtd: no
    use_logd: no
}

totem {
    version: 2
    secauth: off
    interface {
        member {
            memberaddr: 192.168.0.201
        }
        member {
            memberaddr: 192.168.0.202
        }
        ringnumber: 0
        bindnetaddr: 192.168.0.0
        mcastport: 5405
        ttl: 1
    }
    transport: udpu
}

logging {
    fileline: off
    to_logfile: yes
    to_syslog: yes
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}

Here's a sample using multicast:

service {
    # Load the Pacemaker Cluster Resource Manager
    ver: 0
    name: pacemaker
    use_mgmtd: no
    use_logd: no
}

totem {
    version: 2

    # secauth: Enable mutual node authentication. If you choose to
    # enable this ("on"), then do remember to create a shared
    # secret with "corosync-keygen".
    secauth: off

    threads: 0

    # interface: define at least one interface to communicate
    # over. If you define more than one interface stanza, you must
    # also set rrp_mode.
    interface {
            ringnumber: 0
            bindnetaddr: 192.168.1.0
            mcastaddr: 239.255.1.1
            mcastport: 5405
            ttl: 1
    }

}

logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}

Note that if ver in the pacemaker service section is 0, pacemaker is loaded automatically by corosync; otherwise you must start the pacemaker service manually.
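
With ver: 1, for example, you would enable and start pacemaker yourself on both nodes (a sketch in the same init-script style used below):

chkconfig pacemaker on

service pacemaker start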

Copy this configuration file to the other host and start the service:

scp /etc/corosync/corosync.conf root@192.168.0.202:/etc/corosync/corosync.conf

On 192.168.0.201:

chkconfig corosync on

service corosync start

On 192.168.0.202:

chkconfig corosync on

service corosync start

3. Configure the cluster resources using crmsh

Add virtual IP to your cluster:

crm configure

crm(live)configure# primitive vip1 ocf:heartbeat:IPaddr2 params ip=192.168.0.209 cidr_netmask=24 op monitor interval=10s
crm(live)configure# property stonith-enabled=false # To prevent split-brain
crm(live)configure# property no-quorum-policy=stop # To prevent split-brain
crm(live)configure# commit

Test:

crm(live)configure# migrate vip1
crm(live)configure# unmigrate vip1

You will see 192.168.0.209 migrating between these two nodes.
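
To watch this happen, keep an eye on the cluster status from either node, e.g.:

crm_mon -1

or simply "crm status", which prints where vip1 is currently running.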

title: "Deploy Skype for Business Server 2015" date: 2015-08-12 categories: - "cloud-infra"


Server Preparation

ad (ad.virtfan.com): Windows Server 2012 R2
lync (lync.virtfan.com): Windows Server 2012 R2
The following instructions are for a LAN deployment.

Ref: https://technet.microsoft.com/en-us/library/dn933900.aspx
Ref: Install lync 2013 server in win2008r2

Procedure

1. After a fresh installation of Windows Server 2012 R2, update it to the latest patches.
2. Create an AD DS with AD CS on ad.virtfan.com, so that we can retrieve a CA certificate to complete the deployment.
3. Set up the Skype Server on lync.virtfan.com (in domain virtfan.com).
4. Set up DNS, add more users, and use Lync to communicate.

AD Preparation

The following steps are done on ad.virtfan.com.
1. Assign a static IP and change the computer name to ad.
2. Add the Active Directory Domain Services and DNS roles. Create a new forest (virtfan.com) with functional level Windows 2008 R2.
3. Add the Active Directory Certificate Services role with all six features checked.
4. (Optional) Run gpmc.msc, right click the Default Domain Policy and edit it: change Password Complexity to False, then run gpupdate /force to update the group policy.

Skype Server Preparation

The following steps are done on lync.virtfan.com. Make sure you have more than 32GB of free space on C:.
1. Assign a static IP, change the name to lync and join the domain virtfan.com. Add the following features: .NET Framework 3.5, .NET Framework 4.5 -> WCF Services -> HTTP Activation, Media Foundation, Remote Server Administration Tools -> Role Administration Tools -> AD DS and AD LDS Tools, Windows Identity Foundation 3.5.
2. Add the IIS role with the following features: Static Content, Default Document, HTTP Errors, ASP.NET, .NET Extensibility, ISAPI Extensions, ISAPI Filters, HTTP Logging, Logging Tools, Tracing, Client Certificate Mapping Authentication, Windows Authentication, Request Filtering, Static Content Compression, Dynamic Content Compression, IIS Management Console, IIS Management Scripts and Tools.
3. Log on as VIRTFAN\Administrator and add the .NET 3.5 feature.
4. Install KB2982006.
5. Mount the Skype for Business 2015 ISO and run Setup to install.

Setting up Skype Server

The following steps are done on lync.virtfan.com, logged on as VIRTFAN\Administrator.
1. Create a directory C:\share and make it shared and writable.
2. Run the Skype for Business Server Deployment Wizard (部署向导) from the Start menu.
3. Follow the steps in Prepare Active Directory.
4. Click Prepare first Standard Edition Server; it will create the database.
5. Install the management utilities.
6. Run the Skype for Business Server Topology Builder from the Start menu to generate a topology.
7. Create a new topology. When done, click Finish, then right click the Skype for Business Server node to edit its properties: fill in the admin URL (e.g. https://admin.virtfan.com) and select a front-end server as the central server.
8. Publish the topology.
9. Click Install or Update Skype for Business Server System and follow its guide.
10. In Step 3 (Assign Certificate), click Request to request a certificate from ad.virtfan.com.
11. Run start-cspool from the command line to start the server. Warnings are OK; errors are not.
12. Define your DNS records and port forwarding (443) so that Skype can be used from the WAN.

https://meet.virtfan.com -> lync.virtfan.com's IP
https://lync.virtfan.com -> lync.virtfan.com's IP
https://dialin.virtfan.com -> lync.virtfan.com's IP
https://admin.virtfan.com -> lync.virtfan.com's IP
(optional) https://ad.virtfan.com -> ad.virtfan.com's IP

13. Add domain users and assign them via https://admin.virtfan.com.

Lync/Skype Client

1. Install Lync/Skype as part of Microsoft Office 2013 or Office 365.
2. Download and install the CA certificate from https://ad.virtfan.com/certsrv/ (or host it somewhere else).
3. Configure your client accordingly.
4. Click Log on.

Here you go!

WAN

If your servers are behind a firewall or a router, you should also do something like this.
1. On your DNS provider, point these 6 A records (lync/admin/dialin/meet/lyncdiscover/lyncdiscoverinternal.virtfan.com) to your WAN IP.
2. Forward ports 443 and 5061 to the lync server's LAN IP (see the iptables sketch after this list).
(Alternative) 2. If you are using Apache virtual hosts, export the certificate and its private key of lync.virtfan.com (a "jailbreak" export) to the Apache server and configure all 6 domain names like this:

vi /apache/conf.d/ssl.conf

...
ServerName skype.virtfan.com
SSLEngine on
SSLProxyEngine on
SSLCertificateFile /etc/httpd/conf.d/lync-ca/lync.virtfan.com.cer
SSLCertificateKeyFile /etc/httpd/conf.d/lync-ca/lync.virtfan.com.key
ProxyRequests Off
ProxyPass / https://skype.virtfan.com/
ProxyPassReverse / https://skype.virtfan.com/
...

vi /etc/httpd/conf.d/vproxy.conf

ServerName lyncdiscover.virtfan.com
ProxyRequests Off
ProxyPass / http://lyncdiscover.virtfan.com/
ProxyPassReverse / http://lyncdiscover.virtfan.com/

ServerName lyncdiscoverinternal.virtfan.com
ProxyRequests Off
ProxyPass / http://lyncdiscoverinternal.virtfan.com/
ProxyPassReverse / http://lyncdiscoverinternal.virtfan.com/

vi /etc/hosts

...
192.168.122.222 admin.virtfan.com
192.168.122.222 lync.virtfan.com
192.168.122.222 dialin.virtfan.com
192.168.122.222 skype.virtfan.com
192.168.122.222 meet.virtfan.com
192.168.122.222 lyncdiscover.virtfan.com
192.168.122.222 lyncdiscoverinternal.virtfan.com
...

And configure iptables:

iptables -t nat -A PREROUTING -p tcp --dport 5061 -j DNAT --to-destination 192.168.122.222:5061

3. DO NOT add "Internal Server" in your Lync client; "External Server" is enough.
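
For step 2 above, if you go the plain port-forwarding route instead of the Apache reverse proxy, the 443 forward can be done the same way as the 5061 rule (LAN IP reused from the hosts file above; adjust to your own setup):

iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 192.168.122.222:443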

Design, KVM front-end, mybox

While the design itch is there, I'm giving my git server a new homepage. The KVM front-end script is done, and I suddenly don't feel like writing a GUI for it any more, even though ncurses is quite nice... Let's leave it at that for now; the whole thing is only used for playing with virt and SPICE anyway.

oVirt notes

oVirt: just some notes. To boot the node image (networking is needed):

qemu-kvm -m 1024 -localtime -M pc -smp 2 -drive file=ovirt.qcow2,cache=writeback,boot=on -boot d -name kvm-ovirt,process=ovirt -usb -usbdevice tablet

oVirt provided a fedora.iso with the node in it. Ao is building the engine.

On virtfan: spicec -h localhost -p 5910

The node is supposed to join the engine automatically; right now it can join, but the installation fails. nmap shows the engine does have 8443 open while the node only has 22; the installation process is interactive (the node fetches packages from the Internet and reports status back to the engine).

Changed the port to 443 and adjusted the NFS service on the engine per the troubleshooting guide. On F17 NFS defaults to v4 and has to be changed to v3; also add the vdsm user and kvm group, both with id 36, and test with the provided script. Troubleshooting_NFS_Storage_Issues

OK, I'll join the node tomorrow and that should be it.

Building oVirt from source, from the command line or within an IDE: install the JBoss Maven plugin in Eclipse.

Here's the engine architecture:

and here's the engine-core architecture:

Running engine-manage-domain requires ipa-server, which is similar to Windows Active Directory; ipa-server has to be installed on a host other than the engine. Install ipa-server.

title: "Add nat to ovirt via vdsm hooks" date: 2014-05-04 categories: - "cloud-infra"


OK, I do not like control group very much for now.

Comment out the devices section in the control group config (/etc/cgconfig.conf); sorry for my laziness...

mount {
    cpuset  = /cgroup/cpuset;
    cpu     = /cgroup/cpu;
    cpuacct = /cgroup/cpuacct;
    memory  = /cgroup/memory;
#   devices = /cgroup/devices;
    freezer = /cgroup/freezer;
#   net_cls = /cgroup/net_cls;
    blkio   = /cgroup/blkio;
}

Enable IP forwarding

net.ipv4.ip_forward = 1
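
To make this persistent, the usual way is to put the line into /etc/sysctl.conf and reload it, for example:

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf

sysctl -p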

Reboot the host

Connect with virsh to enable libvirt's default virbr0 (NAT)

If you don't know the password for your account, just use the command below to create one.

saslpasswd2 -a libvirt root
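
A quick way to check that the credentials work is to connect to the local libvirt daemon, which will prompt for the SASL user and password you just created:

virsh -c qemu:///system list --all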

Create a nat network.

The network is named nat and its UUID is b42e377d-e849-4c36-bd98-3d090def5ecc.
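
A minimal sketch of what /etc/libvirt/qemu/networks/nat.xml can look like, reusing the name and UUID above and the virbr1 bridge from the brctl command below; the IP/DHCP range is only a placeholder of mine:

<network>
  <name>nat</name>
  <uuid>b42e377d-e849-4c36-bd98-3d090def5ecc</uuid>
  <forward mode='nat'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.2' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>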

virsh net-create /etc/libvirt/qemu/networks/nat.xml

virsh net-autostart nat

virsh net-start nat

Create a tun device and add it to the NAT bridge (virbr1)

UPDATE: This could be ignored if you use extnet.py

tunctl -t nat0 -u qemu

brctl addif virbr1 nat0

Add hook file to vdsm

UPDATE: use the extnet hook from GitHub with a small modification (only the first vNIC is put on NAT; any additional vNIC keeps its original bridge).

#!/usr/bin/python
# vdsm before_vm_start hook: force the VM's first vNIC onto a libvirt
# network instead of the bridge that oVirt configured.

import os
import sys
import traceback
import xml.dom.minidom

import hooking

def replaceSource(interface, newnet):
    # Replace <source bridge='...'/> with <source network='...'/> and
    # switch the interface type accordingly.
    source, = interface.getElementsByTagName('source')
    source.removeAttribute('bridge')
    source.setAttribute('network', newnet)
    interface.setAttribute('type', 'network')

def main():
    # Local modification: always use the 'default' (NAT) libvirt network,
    # regardless of what the engine passes in.
    params = "default"
    os.environ.__setitem__("extnet", params)
    newnet = os.environ.get('extnet')
    if newnet is not None:
        doc = hooking.read_domxml()
        # Only the first vNIC is moved; the others keep their bridge.
        interface = doc.getElementsByTagName('interface')[0]
        replaceSource(interface, newnet)
        hooking.write_domxml(doc)

def test():
    # Sample interface XML (a placeholder, restored so --test runs).
    interface = xml.dom.minidom.parseString("""
    <interface type="bridge">
        <mac address="00:1a:4a:16:01:51"/>
        <source bridge="ovirtmgmt"/>
        <model type="virtio"/>
    </interface>
    """).getElementsByTagName('interface')[0]

    print "Interface before forcing network: %s" % \
        interface.toxml(encoding='UTF-8')

    replaceSource(interface, 'yipee')
    print "Interface after forcing network: %s" % \
        interface.toxml(encoding='UTF-8')

if __name__ == '__main__':
    try:
        if '--test' in sys.argv:
            test()
        else:
            main()
    except:
        hooking.exit_hook('extnet hook: [unexpected error]: %s\n' %
                          traceback.format_exc())
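
For vdsm to pick the hook up, it has to live in vdsm's hook directory on the host and be executable; something along these lines (the exact hook-point directory is an assumption for this vdsm version):

cp extnet.py /usr/libexec/vdsm/hooks/before_vm_start/50_extnet

chmod +x /usr/libexec/vdsm/hooks/before_vm_start/50_extnet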

QEMU-CMD way:

import os
import sys
import hooking
import traceback
import json

def addQemuNs(domXML):
    # Declare the qemu XML namespace so that <qemu:commandline> is valid.
    domain = domXML.getElementsByTagName('domain')[0]
    domain.setAttribute('xmlns:qemu',
                        'http://libvirt.org/schemas/domain/qemu/1.0')

def injectQemuCmdLine(domXML, qc):
    # Append a <qemu:commandline> section with one <qemu:arg> per argument.
    domain = domXML.getElementsByTagName('domain')[0]
    qctag = domXML.createElement('qemu:commandline')

    for cmd in qc:
        qatag = domXML.createElement('qemu:arg')
        qatag.setAttribute('value', cmd)
        qctag.appendChild(qatag)

    domain.appendChild(qctag)

domxml = hooking.read_domxml()

# Get vm uuid, just in case
cur_vm_uuid = domxml.getElementsByTagName('uuid')[0].firstChild.nodeValue

# Hard-coded MAC address and tap device (nat0, created above); adjust as needed.
macaddr = "94:de:80:ea:30:f5"
natname = "nat0"
params = '["-netdev","tap,ifname=%s,script=no,id=hostnet0,downscript=no","-device","virtio-net-pci,mac=%s,netdev=hostnet0,bus=pci.0,addr=0x10"]' % (natname, macaddr)
os.environ.__setitem__("qemu_cmdline", params)

# Modify Qemu Parameter
if 'qemu_cmdline' in os.environ:
    try:
        domxml = hooking.read_domxml()

        qemu_cmdline = json.loads(os.environ['qemu_cmdline'])
        addQemuNs(domxml)
        injectQemuCmdLine(domxml, qemu_cmdline)

        hooking.write_domxml(domxml)
    except:
        sys.stderr.write('qemu_cmdline: [unexpected error]: %s\n'
                         % traceback.format_exc())
        sys.exit(2)

Then you should start the vm WITHOUT ANY NIC if you are using nat.py.

title: "Migrate vm from ESXi to oVirt" date: 2014-04-25 categories: - "cloud-infra"


oVirt: v3.3, CentOS, 192.168.1.111
ESXi: 5.x, 192.168.1.135

For Windows VMs, follow http://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#VMware_to_Proxmox_VE_.28KVM.29 first.

Before you begin, make a copy of the VMware image.

Remove VMware Tools: start the Windows virtual machine on VMware and remove the VMware Tools via the Windows control panel. Reboot.

Enable IDE: start the Windows virtual machine on VMware and execute mergeide.reg (File:Mergeide). The registry is then changed so that Windows can boot from IDE, which is necessary for KVM. Make sure Atapi.sys, Intelide.sys, Pciide.sys, and Pciidex.sys are in the %SystemRoot%\System32\Drivers folder. If any are missing they can be extracted from %SystemRoot%\Driver Cache\I386\Driver.cab, which can be opened in Windows File Explorer like a directory so that the missing files can be copied out; see the Microsoft KB article for details. Shut down Windows.

1. Create an NFS export domain on your oVirt datacenter.

/vdsm/export 0.0.0.0/0.0.0.0(rw)
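
Before going on it is worth checking that the export is actually usable; the 36:36 owner matches the vdsm/kvm uid/gid mentioned earlier in these notes:

chown 36:36 /vdsm/export

exportfs -ra

showmount -e localhost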

2. Install virt-v2v on the oVirt host (CentOS) and add the ESXi credentials to ~/.netrc.

yum install virt-v2v

Add to ~/.netrc:

machine 192.168.1.135 login root password 1234567

Change the permissions of .netrc as the manual says, or you will get a warning about wrong permissions:

chmod 600 ~/.netrc

3. Import myvm. Make sure the VM is powered off on ESXi.

virt-v2v -ic esx://192.168.1.135/?no_verify=1 -o rhev -os 192.168.1.111:/virtfan/export --network mgmtnet myvm

myvm_myvm: 100% [====================================================]D 0h04m48s

virt-v2v: myvm configured with virtio drivers.

4. Import myvm from the export domain into a data domain.

Import.


Run.


"VDSM" "2012-10-29"

I don't like Java, I just don't... but the work still has to be done. Building a UML diagram out of 4000+ files is a bit hard since they are all small classes; I read some of the tools code and then moved on to the vdsm side. The communication is XML-RPC, as I guessed at the beginning.

vdsm is oVirt's node agent. Its functionality is customizable and it can be ported to other management platforms; it is based on KVM and keeps all kinds of transient VM state. The API that vdsm provides is consumed over XML-RPC. The architecture is as follows:

------------+------VDSM-------+------+-----
            |                 |      |
    sysfs---+                 |      |
      LVM---+              libvirt   |
net-tools---+              qmp|      |Virtio serial
      xxx---+             ---+      +-------------/
           ...                |      |
                      -------/        \-------
                     /                        \
                    |  KVM-QEMU VM             |
                     \________________________/

It mainly consists of vdsm, vdsm_cli, vds_bootstrap, vdsm_reg and vdsm_hooks. The lifecycle hooks target both vdsm and VMs, at the before and after stages. vds_bootstrap is used to verify compatibility (network, CPU, authentication, etc.) and currently only supports Red Hat's own products.
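
A quick way to poke that API from the host is the vdsm command line client; if I remember the syntax correctly, something like the following dumps the host capabilities over the SSL XML-RPC port (54321):

vdsClient -s 0 getVdsCaps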