
Setting Up A Highly Available NFS Server (1) [Linux-HA]

Today I'm setting up a Highly Available NFS Server at home.
(Details at this site.)

Four servers are needed.
I am using Proxmox ver 1.6 (http://www.proxmox.com/) at home.

LVM (Logical Volume Manager) running over DRBD.
This page is a very good reference for LVM.
This Proxmox page is also a good reference for DRBD!

disklayer01.png

The physical disks are 2TB each, installed in separate servers.
At the DRBD layer, the disks are linked with a crossover cable, which makes it possible to mirror data between the two servers. This configuration is essential for High Availability (HA).
The OS sees the DRBD device as a single disk, and LVM is set up on top of it.
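For reference, the DRBD resource behind a setup like this is defined in a config file present on both hosts. This is only a minimal sketch: the resource name, disk device, and crossover-link IPs are assumptions you would adjust to your own machines (the hostnames match ours).

```
# /etc/drbd.d/r0.res -- minimal sketch; disk names and IPs are assumed
resource r0 {
    protocol C;                    # synchronous replication, what you want for HA
    on asobimaster {
        device    /dev/drbd0;
        disk      /dev/sdb;        # the 2TB physical disk (name assumed)
        address   10.0.0.1:7788;   # IP on the crossover link
        meta-disk internal;
    }
    on asobihost {
        device    /dev/drbd0;
        disk      /dev/sdb;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```

Protocol C means a write is not acknowledged until it has hit both disks, which is the safe choice when the whole point is surviving the loss of one server.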

As it stands, this is not meant to be a guide. It will eventually become one (hopefully). It is currently more a record of what we have done, what is working and what isn't...far more the latter at this point. Now back to the article.

LVM is not truly necessary for high availability (HA); however, for our virtualized environment it has the advantage of letting us take snapshot backups of our VMs. This means we do not need any downtime to perform backups. Proxmox is also designed to work with LVM: when we create a new VM assigned to our storage volume, Proxmox creates a new LV for each virtual disk. This setup also allows us to live migrate VMs between the two servers in our cluster.

Our environment is a Proxmox Cluster running on the servers configured as above.
Our initial goal is to build a load-balanced high-availability web server, but before that we will set up a Highly Available NFS Server.

loadbalancer01.png

Before moving on, it is worth knowing about OpenVZ.
Wikipedia describes it as follows: OpenVZ is OS-level server virtualization software built on the Linux kernel. It can create multiple independent OS instances on a single physical server. Compared with hypervisor-type (hardware-level) virtualization such as VMware or Xen, it can run more environments on the same hardware, but because swap is not available, processes that exceed their allotted memory are forcibly killed, so its reputation among VPS users is not great. It also cannot run non-Linux environments such as Windows.
The official site is
http://wiki.openvz.org/Main_Page

We are going to build our HA NFS as per the diagram above and have it provide common storage for our web servers. As we are limited to two physical machines, this presents a couple of challenges. With Proxmox we have the options of two types of VMs. The first type is the "container" type, based on OpenVZ. The second type is the fully virtualized variety using KVM. There are advantages and disadvantages to both. OpenVZ-based machines are basically a virtualized OS and have no hypervisor layer. This means there is very little overhead, and VMs created in this fashion operate at near "bare metal" speeds. The fully virtualized systems are slower, especially considering I/O access (disk access/networking). You would think that for an NFS server the container method would be better for these reasons; however, OpenVZ containers cannot access raw disks outside of their containers. This means there is no way to mount our storage volumes, so we are unable to use this option. We can partially mitigate the I/O issue by using paravirtualized drivers (virtio) for our KVM builds. We'll use containers for our load balancers.

So let's get started!

First let's grab the latest version of Debian for our platform. If you are in Japan, I find this mirror is fast (I can get the CD in about 5 minutes).

http://ftp.riken.jp/Linux/debian/debian-cd/current/

Just grab the first CD for your architecture and let's begin. If you are new to Proxmox, we aren't going to get into setting it up here, but the basic install on a new machine is a breeze.

Let's login to the Proxmox web management interface.

proxmox home screen.png

Here is a brief view of our setup. We have the one master and the one node. It is called asobihost for historical reasons...and I just couldn't bring myself to call it asobislave. I kind of wanted to...but cooler heads prevailed. Anyhow, in the time it took to write this the ISO image came down, and we first need to upload it to Proxmox by using the Iso Images link on the left. Once that is done, go to the Virtual Machines link, hit create, and select Fully Virtualized (we are going to build our NFS VMs first).

NFS Creation.png

Here we have chosen our installation media. Our disk storage is our LVM group, which I have named VMcluster, and we have chosen 512MB of RAM and just 8GB of space, as we will be adding LVs from the host later. Another nice thing about Proxmox is that it is possible to adjust the RAM and CPU limits on the fly. Though if you are planning on using more than one core, it's best to set that up during the build.

The other main items to note are the disk type and the network card. In both cases I have chosen the virtio option, as these drivers offer the best performance. For networking, the e1000 option is also a good choice. Click create and you will get a message saying the machine has been created. Go back to the Virtual Machines link and start up your new NFS-server-to-be. Click on the VM, select Open VNC Console, and you are ready to do the initial install of Debian.

Debian Install.png

I am not going to walk you through the Debian install, except to say this write-up is based on the "standard system" option. Get the first one started, then create the 2nd machine and start installing it as well. Once the core is installed and it is downloading packages, it's a good time for a coffee.

While drinking your coffee and waiting, I can tell you about the other advantage of OpenVZ. It is based on templates. Once the template is created, you just upload it and you are done. No waiting (coffee optional). Templates are available from OpenVZ as well as Proxmox, or you can roll your own.

Okay, now the install on the first VM is finished. The first thing to do is set the static IPs and install OpenSSH so we can get out of this clunky VNC window and into a secure shell.

Edit your interfaces file to set the address using your editor of choice. I use vi...have been using it for years...still don't know how to use it properly.

vi /etc/network/interfaces

Your final file should look something like this.
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
#allow-hotplug eth0
#iface eth0 inet dhcp
auto eth0

iface eth0 inet static
address 192.168.your.address
netmask 255.255.255.0
gateway 192.168.your.gateway


then

/etc/init.d/networking restart

You can confirm your new settings by looking at the output of ifconfig.
I had to edit my sources.list to remove the CD reference, so

vi /etc/apt/sources.list

and comment out the CD reference if it is in there.
Time to install ssh server and upgrade the system.

apt-get update
apt-get install openssh-server
apt-get upgrade


Then try to ssh in. If it works, go ahead and configure the second machine in the same way.
Now onto the scary part. We currently have a VM running on a virtual disk created in a logical volume, in a volume group, on top of DRBD. I am still a newbie when it comes to LVM and DRBD, but there are a few things to remember: the DRBD part should behave just like a hard disk, and the volume groups and logical volumes should be similar to partitions. There are commands to view what is going on, so on the host machine (asobimaster) let's look at the following.

pvdisplay shows us the physical device information
--- Physical volume ---
PV Name /dev/drbd0
VG Name drbdvg
PV Size 1.82 TB / not usable 312.00 KB
Allocatable yes
PE Size (KByte) 4096
Total PE 476917
Free PE 139509
Allocated PE 337408
PV UUID iS7MpA-Cnbs-MhlE-2XsM-4AY8-50Xo-Npabpa


The /dev/drbd0 device can be viewed just the same as a /dev/sda device. It is the block device I created when setting up DRBD.

vgdisplay yields
--- Volume group ---
VG Name drbdvg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 26
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 7
Open LV 6
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.82 TB
PE Size 4.00 MB
Total PE 476917
Alloc PE / Size 337408 / 1.29 TB
Free PE / Size 139509 / 544.96 GB
VG UUID X3CjKm-WQy1-nAyw-bdeU-LBJG-cb5p-jxNeuN
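The PE (physical extent) numbers above are just units of allocation, so the sizes can be sanity-checked by hand. A quick check using the figures from this vgdisplay output:

```shell
# Free space = Free PE x PE size. vgdisplay reports PE Size 4.00 MB
# and Free PE 139509, so:
free_pe=139509
pe_size_mib=4
echo "$(( free_pe * pe_size_mib / 1024 )) GiB free"            # integer GiB
awk "BEGIN { printf \"%.2f GB\n\", $free_pe * $pe_size_mib / 1024 }"
```

This prints 544.96 GB, matching the Free PE / Size line, so nothing has gone missing.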

and lvdisplay shows us

--- Logical volume ---
LV Name /dev/drbdvg/vm-113-disk-1
VG Name drbdvg
LV UUID WBHb9s-wLZy-hu5z-nGJa-kt2T-EFpE-1Alv8C
LV Write Access read/write
LV Status available
# open 1
LV Size 8.00 GB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:8


--- Logical volume ---
LV Name /dev/drbdvg/vm-114-disk-1
VG Name drbdvg
LV UUID 1YLQ17-ZkJg-VhdG-HVaB-hvdu-eT66-0GktyI
LV Write Access read/write
LV Status available
# open 1
LV Size 8.00 GB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:9


This last output has been abbreviated, but these are the two logical volumes for our two recently created VMs.

When I decided I wanted to try NFS over DRBD, I thought I had already allocated my entire space to the virtual machines in the storage group drbdvg. I was searching for ways of resizing my volume group or LVs for the longest time, but then I finally realized I was looking at it the wrong way. The way it works, I should be able to just create a logical volume for my needs inside the already existing volume group. Being uncertain, I just went for it with the command

lvcreate -L 1200G -n storage drbdvg

and it seemed to work
--- Logical volume ---
LV Name /dev/drbdvg/storage
VG Name drbdvg
LV UUID VdS6PZ-pWfa-f6Sc-CNMd-zPSd-SU6V-24PorN
LV Write Access read/write
LV Status available
# open 1
LV Size 1.17 TB
Current LE 307200
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:7
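If the LV Size of 1.17 TB looks smaller than the 1200G we asked for, it's just units: lvcreate allocated 307200 extents of 4 MiB each, and lvdisplay reports the total in binary terabytes. The arithmetic:

```shell
le=307200      # Current LE from lvdisplay
pe_mib=4       # PE Size from vgdisplay
echo "$(( le * pe_mib / 1024 )) GiB"                                 # the 1200G we requested
awk "BEGIN { printf \"%.2f TB\n\", $le * $pe_mib / 1024 / 1024 }"    # what lvdisplay shows
```

So 307200 x 4 MiB is exactly 1200 GiB, which divided by 1024 is the 1.17 TB shown above.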

So I "formatted" it with the command

mkfs -t ext3 -m 1 -v /dev/drbdvg/storage

and just like formatting a partition, it seemed to work.
So now the question became, how can I access this volume from the Virtual machine that resides on this host, because without that, this whole endeavour has been for naught. Again, thinking of it simply in terms of physical devices, I thought I would simply try to add the volume to the virtual machine as a hard disk, so after shutting down the guest, on the host (asobimaster, do this on the node you created the VM on) I typed

(Note: the following initially seems to work, but testing the next day showed it does not work properly. If you are interested, feel free to continue, but I don't recommend trying this. The problem gets resolved in part 2.)

qm set 113 -virtio1 /dev/drbdvg/storage

This is the same way I would add a physical drive such as /dev/sdb: 113 is our virtual machine ID (vmid), and -virtio1 is the bus type (like ide or scsi) plus the device number (our boot disk is virtio0).
And it seemed to work! (After failing the first time with the error "unable to read parameters" because I tried it on the wrong node.)
Now I was getting excited, so I fired up the guest and checked the /dev directory

ls /dev/vd*

and sure enough I saw
/dev/vdb in there along with all of the /dev/vda items.
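Under the hood, qm set just writes a line into the VM's config file. On Proxmox 1.x these live under /etc/qemu-server/, so after the command above, 113.conf should contain something along these lines (a sketch; the surrounding lines are illustrative, not copied from our machine):

```
# /etc/qemu-server/113.conf (excerpt; surrounding lines illustrative)
memory: 512
virtio0: VMcluster:vm-113-disk-1
virtio1: /dev/drbdvg/storage
```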

Now this seems a little odd, as partitions are usually associated with a number (e.g. /dev/vda1) and I had already created a file system, but I figured I would try to mount it and see.

mkdir /test
mount -t ext3 /dev/vdb /test
ls /test


and lo and behold, in there I find "lost+found"

After creating a directory, though, the lost+found disappeared.

mkdir /test/testdir

But it seems to be working, as I can see my testdir in there.
As I said, I am still a newbie with LVM and DRBD so I have no idea if this setup is right, or if in a week I will find that my entire infrastructure comes crashing down around my shoulders, but for now it is enough.
And with that, I am off to bed. Will pick this up tomorrow.



