Koozali – Virtual Private Server – Extending storage (/dev/vda) using LVM and RAID (!)

1. Koozali SME server v9 standard installation on a Virtual Private Server.
2. “sme noraid nolvm” installation options were not used.
3. I have just changed the Virtual Private Server plan from 20GB to 50GB of storage.

This is how you can increase the root partition size of your Koozali SME server v9 to make use of the new space.
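
In outline, five layers have to be grown, from the bottom up: the partition (/dev/vda2), the RAID1 array (/dev/md1), the LVM physical volume, the logical volume main/root and finally the filesystem on top. A condensed sketch of the commands used in the working attempt below (no output shown here; read the full walk-through before running anything):

fdisk /dev/vda                              # delete /dev/vda2, recreate it with the same start and the maximum end
reboot                                      # make the kernel re-read the partition table
mdadm --grow /dev/md1 --size=max            # grow the RAID1 array into the enlarged partition
pvresize /dev/md1                           # grow the LVM physical volume
lvextend -l +100%FREE /dev/main/root        # grow the logical volume
resize2fs /dev/mapper/main-root             # grow the filesystem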


[root@f0002 ~]# sfdisk -l

Disk /dev/vda: 104025 cylinders, 16 heads, 63 sectors/track
Units = cylinders of 516096 bytes, blocks of 1024 bytes, counting from 0

104025 * 516096 / 1024 / 1024 / 1024 ≈ 50GB

This is correct.
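
If you prefer to check the size in bytes rather than multiplying cylinder counts, blockdev (part of util-linux) reports it directly; this is not part of the original session, but it prints the same byte count that fdisk shows below, and 53687091200 / 1024 / 1024 / 1024 is exactly 50GB:

blockdev --getsize64 /dev/vda        # prints 53687091200, the disk size in bytes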

ATTEMPT #1 – THIS IS HOW NOT TO DO IT!

[root@f0002 ~]# fdisk /dev/vda

Command (m for help): p

Disk /dev/vda: 53.7 GB, 53687091200 bytes
16 heads, 63 sectors/track, 104025 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3ac5a058

Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3         510      256000   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/vda2             510       41611    20714496   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.

Command (m for help): n

Command action
e   extended
p   primary partition (1-4)
p

Partition number (1-4): 3

First cylinder (1-104025, default 1): 41612

Last cylinder, +cylinders or +size{K,M,G} (41612-104025, default 104025): 104025

(104025 - 41612) * 516096 / 1024 / 1024 / 1024 ≈ 30GB. This is correct.

Command (m for help): p

Disk /dev/vda: 53.7 GB, 53687091200 bytes
16 heads, 63 sectors/track, 104025 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3ac5a058

Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3         510      256000   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/vda2             510       41611    20714496   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.
/dev/vda3           41612      104025    31456656   83  Linux

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3         510      256000   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/vda2             510       41611    20714496   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.
/dev/vda3           41612      104025    31456656   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

[root@f0002 ~]# reboot

Broadcast message from [user@hostname]
(/dev/pts/0) at 22:41 …

The system is going down for reboot NOW!

[root@f0002 ~]# sfdisk -l

Disk /dev/vda: 104025 cylinders, 16 heads, 63 sectors/track
Units = cylinders of 516096 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start     End   #cyls    #blocks   Id  System
/dev/vda1   *      2+    509-    508-    256000   fd  Linux raid autodetect
start: (c,h,s) expected (2,0,33) found (0,32,33)
end: (c,h,s) expected (509,15,31) found (31,254,31)
/dev/vda2        509+  41610-  41101-  20714496   fd  Linux raid autodetect
start: (c,h,s) expected (509,15,32) found (31,254,32)
end: (c,h,s) expected (1023,15,63) found (1023,254,63)
/dev/vda3      41611  104024   62414   31456656   fd  Linux raid autodetect
/dev/vda4          0       -       0          0    0  Empty

Disk /dev/md1: 5174528 cylinders, 2 heads, 4 sectors/track

Disk /dev/mapper/main-root: 2319 cylinders, 255 heads, 63 sectors/track

Disk /dev/mapper/main-swap: 257 cylinders, 255 heads, 63 sectors/track

Disk /dev/md0: 63984 cylinders, 2 heads, 4 sectors/track

[root@f0002 ~]# mdadm -D /dev/md1
/dev/md1:
Version : 1.1
Creation Time : Fri Nov 18 13:32:51 2016
Raid Level : raid1
Array Size : 20698112 (19.74 GiB 21.19 GB)
Used Dev Size : 20698112 (19.74 GiB 21.19 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Tue May  9 22:46:00 2017
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Name : localhost.localdomain:1
UUID : 08f2b956:177fd399:e5ffa7a4:c5d989e5
Events : 6093272

Number   Major   Minor   RaidDevice State
0     252        2        0      active sync   /dev/vda2
2       0        0        2      removed

[root@f0002 ~]# mdadm --add /dev/md1 /dev/vda3
mdadm: added /dev/vda3

[root@f0002 ~]# mdadm -D /dev/md1
/dev/md1:
Version : 1.1
Creation Time : Fri Nov 18 13:32:51 2016
Raid Level : raid1
Array Size : 20698112 (19.74 GiB 21.19 GB)
Used Dev Size : 20698112 (19.74 GiB 21.19 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Tue May  9 22:48:57 2017
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1

Rebuild Status : 20% complete

Name : localhost.localdomain:1
UUID : 08f2b956:177fd399:e5ffa7a4:c5d989e5
Events : 6093366

Number   Major   Minor   RaidDevice State
0     252        2        0      active sync   /dev/vda2
2     252        3        1      spare rebuilding   /dev/vda3
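
While an array is rebuilding, the progress can also be followed without mdadm, via /proc/mdstat (a standard check, shown here without output):

cat /proc/mdstat                     # shows the resync percentage for md1
watch -n 5 cat /proc/mdstat          # refresh every 5 seconds until the rebuild finishes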


No; this is not the correct approach. Adding /dev/vda3 as a second member of the RAID1 array only gives the existing 20GB array a mirror on the new space; it does not make the array any bigger. Let's undo our changes and try again.

[root@f0002 ~]# mdadm --remove /dev/md1 /dev/vda3
mdadm: hot remove failed for /dev/vda3: Device or resource busy

[root@f0002 ~]# mdadm --fail /dev/md1 /dev/vda3
mdadm: set /dev/vda3 faulty in /dev/md1

[root@f0002 ~]# mdadm --remove /dev/md1 /dev/vda3
mdadm: hot removed /dev/vda3 from /dev/md1
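
As an extra precaution (not part of the original session), the RAID superblock that was written onto /dev/vda3 when it joined the array can be wiped before the partition is deleted, so no stale metadata is left behind:

mdadm --zero-superblock /dev/vda3    # only after the device has been failed and removed from md1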

[root@f0002 ~]# fdisk /dev/vda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): d
Partition number (1-4): 3

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

[root@f0002 ~]# reboot
Broadcast message from [user@hostname]
(/dev/pts/0) at 23:05 …

The system is going down for reboot NOW!

ATTEMPT #2 – THIS IS HOW TO DO IT!

[root@f0002 ~]# mdadm -D /dev/md1
/dev/md1:
Version : 1.1
Creation Time : Fri Nov 18 13:32:51 2016
Raid Level : raid1
Array Size : 20698112 (19.74 GiB 21.19 GB)
Used Dev Size : 20698112 (19.74 GiB 21.19 GB)

This is the 20GB RAID1 array that backs the root partition. I need to change the size of /dev/vda2 and then grow the size of the array.

Using fdisk, you first delete the partition and then immediately recreate it with the same start cylinder (510 in this case) but with the end pushed out to the maximum boundary.

When the table is written, the data on the partition is not deleted; only its boundaries in the partition table are moved.
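
Before doing this it may be wise to keep a copy of the current partition table somewhere safe (an optional step, not shown in the original session); sfdisk can dump and later restore it, and the filename is arbitrary:

sfdisk -d /dev/vda > vda-partition-table.txt     # save the current layout
# to roll back:  sfdisk /dev/vda < vda-partition-table.txt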

[root@f0002 ~]# fdisk /dev/vda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): p

Disk /dev/vda: 53.7 GB, 53687091200 bytes
16 heads, 63 sectors/track, 104025 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3ac5a058

Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3         510      256000   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/vda2             510       41611    20714496   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.

So let's try to resize /dev/vda2! This is the scary bit:

Command (m for help): d
Partition number (1-4): 2

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (1-104025, default 1): 510
Last cylinder, +cylinders or +size{K,M,G} (510-104025, default 104025): 104025

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/vda: 53.7 GB, 53687091200 bytes
16 heads, 63 sectors/track, 104025 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3ac5a058

Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3         510      256000   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/vda2             510      104025    52171576   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
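
As the warning says, a reboot is not strictly required; partprobe (from the parted package) can ask the kernel to re-read the table, although on a disk whose partitions are in use it often fails with the same "busy" problem, in which case the reboot below is the simplest way out:

partprobe /dev/vda
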
[root@f0002 ~]# reboot

Broadcast message from [user@hostname]
(/dev/pts/0) at 23:11 …

The system is going down for reboot NOW!

[root@f0002 ~]# sfdisk -l

Disk /dev/vda: 104025 cylinders, 16 heads, 63 sectors/track
Units = cylinders of 516096 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start     End   #cyls    #blocks   Id  System
/dev/vda1   *      2+    509-    508-    256000   fd  Linux raid autodetect
start: (c,h,s) expected (2,0,33) found (0,32,33)
end: (c,h,s) expected (509,15,31) found (31,254,31)
/dev/vda2        509+ 104024  103516-  52171576   fd  Linux raid autodetect
/dev/vda3          0       -       0          0    0  Empty
/dev/vda4          0       -       0          0    0  Empty

Disk /dev/md1: 5174528 cylinders, 2 heads, 4 sectors/track

Disk /dev/mapper/main-root: 2319 cylinders, 255 heads, 63 sectors/track

Disk /dev/mapper/main-swap: 257 cylinders, 255 heads, 63 sectors/track

Disk /dev/md0: 63984 cylinders, 2 heads, 4 sectors/track

103516 * 516096 / 1024 / 1024 / 1024 ≈ 50GB. This is correct. It worked!!

Now we simply need to grow our RAID1 volume:

[root@f0002 ~]# mdadm --grow /dev/md1 --size=max
mdadm: component size of /dev/md1 has been set to 52155192K

52155192K / 1024 / 1024 ≈ 49.7GiB, i.e. effectively the whole 50GB disk minus /boot and the RAID metadata; this is correct.
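
If you want to double-check, the new component size also shows up in the detailed array output (a quick verification, not part of the original session):

mdadm -D /dev/md1 | grep 'Array Size'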

[root@f0002 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/main-root
18G   16G  1.4G  92% /
tmpfs                 499M     0  499M   0% /dev/shm
/dev/md0              239M   55M  172M  24% /boot

Hmmm… the root filesystem is still roughly 20GB (18G as shown) instead of 50GB. Note that /dev/mapper/main-root is a Logical Volume Manager (LVM) volume.

LVM sits on top of the RAID1 array /dev/md1 (which itself sits on the physical device /dev/vda) and provides the flexibility to present multiple physical devices as a single logical volume to the operating system. For example, LVM can make two partitions on two hard disk drives appear as one volume.
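
A quick way to see the whole LVM stack at a glance is the summary commands that ship with lvm2 (shown here without output): pvs lists the physical volumes (here /dev/md1), vgs the volume groups (main) and lvs the logical volumes (root and swap):

pvs
vgs
lvs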

As we have already resized /dev/vda2 and grown /dev/md1 accordingly, the remaining layers to grow are all inside LVM. This is how to grow the LVM volume /dev/mapper/main-root:

[root@f0002 ~]# lvdisplay
--- Logical volume ---
LV Path                /dev/main/root
LV Name                root
VG Name                main
LV UUID                zw22OW-lRnz-CIIB-9eZr-JC83-JjZd-xORwsd
LV Write Access        read/write
LV Creation host, time localhost.localdomain, 2016-11-18 13:32:52 +1000
LV Status              available
# open                 1
LV Size                17.77 GiB
Current LE             4548
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:0

In order to increase the LV, we will first need to increase the VG:

[root@f0002 ~]# vgdisplay
--- Volume group ---
VG Name               main
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  3
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               2
Max PV                0
Cur PV                1
Act PV                1
VG Size               19.74 GiB
PE Size               4.00 MiB
Total PE              5053
Alloc PE / Size       5052 / 19.73 GiB
Free  PE / Size       1 / 4.00 MiB
VG UUID               RIUaPL-IkLV-EaX5-WSJ2-XUMb-eP5i-e5lPh3

But in order to increase the VG, we will first need to increase the PV:

[root@f0002 ~]# pvdisplay
--- Physical volume ---
PV Name               /dev/md1
VG Name               main
PV Size               19.74 GiB / not usable 0
Allocatable           yes
PE Size               4.00 MiB
Total PE              5053
Free PE               1
Allocated PE          5052
PV UUID               ewfPTD-023T-28rU-wDzs-uvO9-s0Ho-QzCf7n

[root@f0002 ~]# pvresize /dev/md1
Physical volume "/dev/md1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized

[root@f0002 ~]# pvdisplay
--- Physical volume ---
PV Name               /dev/md1
VG Name               main
PV Size               49.74 GiB / not usable 3.80 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              12732
Free PE               7680
Allocated PE          5052
PV UUID               ewfPTD-023T-28rU-wDzs-uvO9-s0Ho-QzCf7n

[root@f0002 ~]# vgdisplay
--- Volume group ---
VG Name               main
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  5
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               2
Max PV                0
Cur PV                1
Act PV                1
VG Size               49.73 GiB
PE Size               4.00 MiB
Total PE              12732
Alloc PE / Size       5052 / 19.73 GiB
Free  PE / Size       7680 / 30.00 GiB
VG UUID               RIUaPL-IkLV-EaX5-WSJ2-XUMb-eP5i-e5lPh3

[root@f0002 ~]# lvdisplay
--- Logical volume ---
LV Path                /dev/main/root
LV Name                root
VG Name                main
LV UUID                zw22OW-lRnz-CIIB-9eZr-JC83-JjZd-xORwsd
LV Write Access        read/write
LV Creation host, time localhost.localdomain, 2016-11-18 13:32:52 +1000
LV Status              available
# open                 1
LV Size                17.77 GiB
Current LE             4548
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:0

[root@f0002 ~]# lvextend -l +100%FREE /dev/main/root
Size of logical volume main/root changed from 17.77 GiB (4548 extents) to 47.77 GiB (12228 extents).
Logical volume root successfully resized.

Now we can finally resize the actual file system sitting on top:

[root@f0002 ~]# resize2fs /dev/mapper/main-root
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/main-root is mounted on /; on-line resizing required
old desc_blocks = 2, new_desc_blocks = 3
Performing an on-line resize of /dev/mapper/main-root to 12521472 (4k) blocks.
The filesystem on /dev/mapper/main-root is now 12521472 blocks long.

[root@f0002 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/main-root
47G   16G   30G  34% /
tmpfs                 499M     0  499M   0% /dev/shm
/dev/md0              239M   55M  172M  24% /boot

The root filesystem has been successfully increased from roughly 20GB to the full 50GB (47G as reported by df).
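
For future reference, lvm2 can also do the last two steps in one go: lvextend with the --resizefs (-r) option grows the logical volume and then runs the appropriate filesystem resize tool itself (a convenience, not what was used above):

lvextend -r -l +100%FREE /dev/main/root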
