Volume / Disk Management
At this point, let's assume something has changed and we need additional storage (e.g. the server has become insanely popular and more people are adding content). The initial setup left us with no room to expand our file systems! We need to change this.
For this scenario, we are expecting the storage requirement to be 3 GB. If 3 GB of storage is needed, let's double that amount for the actual size. Here are the planned adjustments for each file system:
var = 6 GB (double the expected storage)
temp = 12 GB (double the www size)
backup = 12 GB (double the www size)
In the analysis and design section, we wanted to have some file systems smaller than the logical volumes on which they sit. This allows us to allocate additional space when needed. However, when we created the volumes, Ubuntu automatically expanded the file systems to the maximum size of each volume. Normally this is OK...but we want a system that allows growth when needed and ensures we have time to add additional hard drives BEFORE they are needed, which keeps us from being stuck between a rock and a hard place! You do not want to lose a job because somebody did not estimate growth correctly or the budget did not allow for large capacity when the system first rolled out.
This design calls for the backup, var and temp file systems to be slightly smaller than the maximum space available on the logical volumes in which they reside.
So, let's make each logical volume an extra 1 GB larger than its file system. This means that we will need an additional 23 GB of storage: (7 + 13 + 13) - (2 + 4 + 4) = 33 - 10 = 23. If we add two drives that are 12 GB each, that will be just enough to cover our needs. (NOTE: This was an arbitrary number because I wanted to show you how to add 2 drives to the system.)
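If you want to sanity-check that math from the shell, bash arithmetic expansion does it in one line:
echo $(( (7 + 13 + 13) - (2 + 4 + 4) ))
which prints 23.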
Here is a graphical representation of what needs to be accomplished:
If we were to type df -h right now, we should see something like this:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/LVG-root 3.9G 976M 2.7G 27% /
udev 993M 4.0K 993M 1% /dev
tmpfs 401M 232K 401M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 1002M 0 1002M 0% /run/shm
/dev/sda1 179M 25M 145M 15% /boot
/dev/mapper/LVG-bak 3.8G 121M 3.5G 4% /backup
/dev/mapper/LVG-temp 3.8G 121M 3.5G 4% /temp
/dev/mapper/LVG-var 1.9G 298M 1.5G 17% /var
Since I am running VMware, adding additional space is a snap. However, I will add it in such a way that Ubuntu will see 2 drives added to the system just as if we were to add 2 physical drives to a physical server.
- Shut down and power off the server by typing shutdown -P now {ENTER}
- In the vSphere client, right-click the Virtual Machine and choose Edit Settings.
- On the hardware tab, click the Add button and select Hard Disk. Click Next, choose "Create a new virtual disk", click Next, set the size to 12 GB, click Next, Next, Finish.
- Add another 12 GB disk using the same steps above and click OK to close the settings and allow VMware to process the changes.
Collect information about the newly added drives.
- Start the Ubuntu server and connect using PuTTY.
- At the login prompt, log in with your administrator account (administrator / myadminpass) and then type su followed by the root password (myrootpass)
- Type pvdisplay which should show something similar to this:
--- Physical volume ---
PV Name /dev/sda5
VG Name LVG
PV Size 19.81 GiB / not usable 3.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 5071
Free PE 760
Allocated PE 4311
PV UUID CJOZ2d-rhek-Dy95-UVuN-hAoR-Ao9q-nrScUv
The important bits of info here are the PV Name and VG Name for our existing configuration.
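As another quick check before we expand anything, the volume group itself can be inspected; vgdisplay shows the total size and free space of the group (this assumes the volume group is named LVG, as the output above indicates):
vgdisplay LVG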
- Type fdisk -l which should show something similar to this (abbreviated to show just the important parts):
Disk /dev/sda: 21.5 GB, 21474836480 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 391167 194560 83 Linux
/dev/sda2 393214 41940991 20773889 5 Extended
/dev/sda5 393216 41940991 20773888 8e Linux LVM
Disk /dev/sdb: 12.9 GB, 12884901888 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 12.9 GB, 12884901888 bytes
Disk /dev/sdc doesn't contain a valid partition table
The important bits of info here are the device paths for the new drives: /dev/sdb and /dev/sdc.
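If you prefer a more compact view, lsblk also lists every disk and partition the kernel can see; the two new drives should appear with no partitions under them (assuming they really were detected as /dev/sdb and /dev/sdc):
lsblk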
Prepare the first drive (/dev/sdb) to be used by the LVM
Type the following:
fdisk /dev/sdb
n (Create New Partition)
p (Primary Partition)
1 (Partition Number)
{ENTER} (use default for first cylinder)
{ENTER} (use default for last cylinder)
t (Change partition type)
8e (Set to Linux LVM)
p (Preview how the drive will look)
w (Write changes)
Prepare the second drive (/dev/sdc) to be used by the LVM
Do the exact same steps as above but start with fdisk /dev/sdc
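If you would rather not answer fdisk's prompts twice, the same partitioning can be scripted. This is only a sketch using parted (install it with apt-get install parted if it is not already present) and it assumes the new, empty drives really are /dev/sdb and /dev/sdc:
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 0% 100%
parted -s /dev/sdb set 1 lvm on
parted -s /dev/sdc mklabel msdos
parted -s /dev/sdc mkpart primary 0% 100%
parted -s /dev/sdc set 1 lvm on
Either way, the end result should be a single partition on each drive flagged for Linux LVM.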
Create physical volumes using the new drives
If we type fdisk -l, we now see /dev/sdb1 and /dev/sdc1 which are Linux LVM partitions.
Type the following to create physical volumes:
pvcreate /dev/sdb1
pvcreate /dev/sdc1
Now add the physical volumes to the volume group (LVG) by typing the following:
vgextend LVG /dev/sdb1
vgextend LVG /dev/sdc1
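To confirm the volume group really picked up the new space, the short-form LVM reports can be used; the VFree column of vgs should now be roughly 24 GB larger (again assuming the volume group is named LVG):
pvs
vgs LVG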
Now that the space of both drives has been added to the volume group called LVG, we can allocate that space to grow the logical volumes.
To get a list of volume paths to use in the next commands, type lvscan to show your current volumes and their sizes.
Type the following to grow each volume by a specified amount (the number after the plus sign):
lvextend -L+5G /dev/LVG/var
lvextend -L+9G /dev/LVG/bak
lvextend -L+9G /dev/LVG/temp
or you can omit the plus sign and specify the exact end-result size you want:
lvextend -L7G /dev/LVG/var
lvextend -L13G /dev/LVG/bak
lvextend -L13G /dev/LVG/temp
To see the new sizes, type lvscan again.
The last thing to do now is the actual growth of the file systems. We want to grow the existing file systems, but only by a certain amount so we do not take up all the space in the volumes...we want room to grow in the future so we have time to order and install new drives when needed.
resize2fs /dev/LVG/var 6G
resize2fs /dev/LVG/bak 12G
resize2fs /dev/LVG/temp 12G
If we need to increase space in /var at a later point, we can issue the following command (without any downtime):
resize2fs /dev/LVG/var 6100M
We could continue to increase this particular file system until we reach the limit of the volume, which is 7 GB at the moment.
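And if you ever want a file system to use every last bit of its volume, resize2fs can be run with no size argument and it will grow the file system to fill the volume (shown for /var only as an example):
resize2fs /dev/LVG/var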
Remember, df -h will tell you the size of each file system and lvscan will tell you the size of the volumes the file systems live in.
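If you want both numbers side by side, one quick way (just a sketch; lvs ships with the same LVM tools as lvscan) is:
lvs LVG
df -h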
If we were to type df -h right now, we should see something like this:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/LVG-root 3.8G 904M 2.7G 26% /
udev 993M 4.0K 993M 1% /dev
tmpfs 401M 248K 401M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 1002M 0 1002M 0% /run/shm
/dev/sda1 179M 25M 145M 15% /boot
/dev/mapper/LVG-bak 12G 125M 12G 2% /var/backup
/dev/mapper/LVG-temp 12G 125M 12G 2% /var/temp
/dev/mapper/LVG-var 6.0G 301M 5.4G 6% /var
TIP: If you want to see everything in a specific block size, such as everything showing up in megabytes, you can use df --block-size=M (or the short form df -BM).