Sunday, 16 October 2011

Linux: Sharing disks via iSCSI on Ubuntu 11.04

Recently I wanted to test a clustering solution based on shared storage, and I was looking for something that would work in my virtualized environment. One requirement was that the shared disk had to be visible as a block device (not as a mounted NFS share). The choice fell on iSCSI. There is quite an interesting open source solution providing NAS functionality (http://www.freenas.org/), but since it is based on FreeBSD 8.2 and my native system is Linux, I would have to run it as another virtual machine - that might be too much for my box (one FreeNAS virtual machine plus two virtual machines hosting the cluster - Figure 1). Therefore I searched for something that could be configured natively on my Ubuntu 11.04 box. What I found and decided to configure was the iscsitarget daemon - below you can find step-by-step instructions on how to do that.


Figure 1 iSCSI client-server architecture




iSCSI Server configuration - Ubuntu

First of all you need to install the iscsitarget software - it is available in the standard Ubuntu repo - as below:

# apt-get install iscsitarget
Reading package lists... Done
Building dependency tree     
Reading state information... Done
Suggested packages:
  iscsitarget-source iscsitarget-dkms
Recommended packages:
  iscsitarget-module
The following NEW packages will be installed:
  iscsitarget
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 78.6 kB of archives.
After this operation, 291 kB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu/ natty/universe iscsitarget amd64 1.4.20.2-1ubuntu1 [78.6 kB]
Fetched 78.6 kB in 0s (128 kB/s)   
Selecting previously deselected package iscsitarget.
(Reading database ... 315227 files and directories currently installed.)
Unpacking iscsitarget (from .../iscsitarget_1.4.20.2-1ubuntu1_amd64.deb) ...
Processing triggers for ureadahead ...
ureadahead will be reprofiled on next reboot
Processing triggers for man-db ...
Setting up iscsitarget (1.4.20.2-1ubuntu1) ...
 * iscsitarget not enabled in "/etc/default/iscsitarget", not starting...

Define the LUNs in the configuration file as follows (the Path entries point to the files backing the LUNs):

# vim /etc/iet/ietd.conf
...
Target ubuntu.mediate:storage.sys1
        Lun 0 Path=/luns/storagelun0,Type=fileio,ScsiId=lun0,ScsiSN=lun0
        Lun 1 Path=/luns/storagelun1,Type=fileio,ScsiId=lun1,ScsiSN=lun1

Create the files that will back the LUNs. One can also use real devices (e.g. a USB stick) as backing storage for LUNs, but for me files were just perfect - easy to move and control.

# cd /luns/
krychu@krystianek:/luns$ sudo dd if=/dev/zero of=storagelun0 count=0 obs=1 seek=10G
0+0 records in
0+0 records out
0 bytes (0 B) copied, 9.01e-06 s, 0.0 kB/s
krychu@krystianek:/luns$ ls -latr
total 8
drwxr-xr-x 26 root root        4096 2011-10-15 10:39 ..
-rw-r--r--  1 root root 10737418240 2011-10-15 10:40 storagelun0
drwxr-xr-x  2 root root        4096 2011-10-15 10:40 .
krychu@krystianek:/luns$ ls -lh
total 0
-rw-r--r-- 1 root root 10G 2011-10-15 10:40 storagelun0
krychu@krystianek:/luns$ sudo dd if=/dev/zero of=storagelun1 count=0 obs=1 seek=1G
0+0 records in
0+0 records out
0 bytes (0 B) copied, 1.285e-05 s, 0.0 kB/s
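The dd invocations above create sparse files: only the file size is set, no blocks are actually written. You can see the difference between apparent and allocated size like this (a minimal sketch using a hypothetical demo-lun file, not one of the LUNs above):

```shell
# Create a sparse 1 GiB file the same way as above (hypothetical name: demo-lun)
dd if=/dev/zero of=demo-lun count=0 bs=1 seek=1G

# Apparent size is 1 GiB ...
ls -lh demo-lun

# ... but almost no disk blocks are actually allocated
du -h demo-lun

rm demo-lun
```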

Next enable iscsitarget in its defaults file - modify the content of /etc/default/iscsitarget so that it matches the one below:

# cat /etc/default/iscsitarget
ISCSITARGET_ENABLE=true


Start the iscsitarget service:

# service iscsitarget start
 * Starting iSCSI enterprise target service                              [ OK ]
                                                                         [ OK ]

Ok, that's it - iSCSI should now be configured to publish two LUNs (storagelun0 and storagelun1) from the /luns directory. The next step is to configure the client machines.
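Before moving on to the clients, you can check on the server that the target is really up. This is a sketch assuming a default IET setup - the /proc paths below are specific to the iscsitarget (IET) kernel module:

```shell
# ietd should be listening on the default iSCSI port 3260
sudo netstat -tlnp | grep 3260

# IET publishes its state under /proc/net/iet -
# "volume" lists the exported targets/LUNs, "session" the connected initiators
cat /proc/net/iet/volume
cat /proc/net/iet/session
```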


iSCSI Client - CentOS 6.0 on KVM

First of all check whether the required software is installed: iscsi-initiator-utils (in my case it was). If it is not, install it from the standard repository as follows:

[root@localhost ~]# yum install iscsi-initiator-utils
...

Start the iscsi service, and enable both the iscsi and multipathd services during system boot:

[root@localhost ~]# service iscsi start
[root@localhost ~]# chkconfig --list iscsi
iscsi              0:off    1:off    2:on    3:on    4:on    5:on    6:off
[root@localhost ~]# chkconfig --list multipathd
multipathd         0:off    1:off    2:off    3:off    4:off    5:off    6:off
[root@localhost ~]# chkconfig --add multipathd
[root@localhost ~]# chkconfig --list multipathd
multipathd         0:off    1:off    2:off    3:off    4:off    5:off    6:off
[root@localhost ~]# chkconfig multipathd on
[root@localhost ~]# service multipathd start
Starting multipathd daemon:                                [  OK  ]

Now you can perform the discovery of the available iSCSI targets:

[root@localhost ~]# iscsiadm -m discovery -t st -p 192.168.122.1:3260
192.168.122.1:3260,1 ubuntu.mediate:storage.sys1
192.168.1.133:3260,1 ubuntu.mediate:storage.sys1
192.168.100.1:3260,1 ubuntu.mediate:storage.sys1
192.168.101.1:3260,1 ubuntu.mediate:storage.sys1

Next connect to the target. There are two options: you either specify the target (name, IP), in which case the tooling will log in only to that target, or you leave it unspecified and you will be connected to all targets. In this manual I will use the first approach.

[root@localhost ~]# iscsiadm -m node -l -T ubuntu.mediate:storage.sys1 -p 192.168.122.1:3260
Logging in to [iface: default, target: ubuntu.mediate:storage.sys1, portal: 192.168.122.1,3260]
Login to [iface: default, target: ubuntu.mediate:storage.sys1, portal: 192.168.122.1,3260] successful.
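For completeness, the second approach (logging in to every target known from discovery) simply drops the -T/-p filter; afterwards you can inspect the sessions:

```shell
# Log in to all targets known from discovery (no -T/-p filter)
iscsiadm -m node -l

# List the active iSCSI sessions
iscsiadm -m session

# Print session details, including the attached SCSI devices
iscsiadm -m session -P 3
```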

Ok, now let's get to the multipath configuration. First of all copy the example configuration file to the /etc directory and build the multipath maps:

[root@localhost ~]# cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.synthetic /etc/
[root@localhost ~]# multipath -v2
Oct 15 11:01:38 | /lib/udev/scsi_id exitted with 1
Oct 15 11:01:38 | /lib/udev/scsi_id exitted with 1
[root@localhost ~]# multipath -ll
149455400000000006c756e31000000000000000000000000 dm-3 IET,VIRTUAL-DISK
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 6:0:0:1 sdb 8:16  active ready  running
149455400000000006c756e30000000000000000000000000 dm-2 IET,VIRTUAL-DISK
size=10G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 6:0:0:0 sda 8:0   active ready  running

[root@localhost ~]# ls -l /dev/mapper/
total 0
lrwxrwxrwx 1 root root      7 Oct 15 11:01 149455400000000006c756e30000000000000000000000000 -> ../dm-2
lrwxrwxrwx 1 root root      7 Oct 15 11:01 149455400000000006c756e31000000000000000000000000 -> ../dm-3
crw-rw---- 1 root root 10, 58 Oct 15 09:52 control
lrwxrwxrwx 1 root root      7 Oct 15 09:52 vg_centos6hosta-lv_root -> ../dm-0
lrwxrwxrwx 1 root root      7 Oct 15 09:52 vg_centos6hosta-lv_swap -> ../dm-1

Ok, now let us configure the multipath daemon so that the device is always available under an alias (e.g. ha-mediate, i.e. /dev/mapper/ha-mediate). The wwid below is the one reported for the 10G LUN by multipath -ll above. Add the following section to the /etc/multipath.conf file and restart the multipathd daemon:

[root@localhost ~]# cat /etc/multipath.conf
##
## This is a template multipath-tools configuration file
## Uncomment the lines relevent to your environment
##
multipaths {
    multipath {
        wwid            149455400000000006c756e30000000000000000000000000
        alias            ha-mediate
        path_grouping_policy    multibus
        path_selector        "round-robin 0"
        failback        manual
        rr_weight        priorities
        no_path_retry        5
        rr_min_io        100
    }
}

[root@localhost ~]# service multipathd restart
Stopping multipathd daemon:                                [  OK  ]
Starting multipathd daemon:                                [  OK  ]
[root@localhost ~]# multipath -ll
149455400000000006c756e31000000000000000000000000 dm-3 IET,VIRTUAL-DISK
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 6:0:0:1 sdb 8:16  active ready  running
ha-mediate (149455400000000006c756e30000000000000000000000000) dm-2 IET,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 6:0:0:0 sda 8:0   active ready  running
[root@localhost ~]# ls -l /dev/mapper/
total 0
lrwxrwxrwx 1 root root      7 Oct 15 11:05 149455400000000006c756e31000000000000000000000000 -> ../dm-3
crw-rw---- 1 root root 10, 58 Oct 15 09:52 control
lrwxrwxrwx 1 root root      7 Oct 15 11:05 ha-mediate -> ../dm-2
lrwxrwxrwx 1 root root      7 Oct 15 09:52 vg_centos6hosta-lv_root -> ../dm-0
lrwxrwxrwx 1 root root      7 Oct 15 09:52 vg_centos6hosta-lv_swap -> ../dm-1

As you can see, the device is now available under its alias - in my case ha-mediate. So that's it - now you can create a filesystem on the device, mount it and start using it ;)

If you mount the iSCSI target from only one server, you can create a cluster-unaware filesystem like ext3 or ext4 (example below). However, for granting access from multiple servers a cluster-aware filesystem (e.g. GFS2 or OCFS2) has to be used.

[root@localhost ~]# mkfs.ext4 /dev/mapper/ha-mediate
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost ~]# mkdir /mnt/tmp
[root@localhost ~]# mount /dev/mapper/ha-mediate /mnt/tmp/
[root@localhost ~]# ls -l /mnt/tmp/
total 16
drwx------ 2 root root 16384 Oct 15 11:06 lost+found
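If the filesystem should come back after a reboot, it can be added to /etc/fstab; the _netdev option defers the mount until the network (and thus the iSCSI session) is up. A sketch, assuming the mount point /mnt/tmp from above:

```shell
# /etc/fstab entry - _netdev defers the mount until networking is available
/dev/mapper/ha-mediate  /mnt/tmp  ext4  _netdev  0 0
```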
