Description

We have all been in the situation where we ran out of space and needed to add another drive. Back before LVM, you would have to add a second drive, create a partition, and find a decent place to mount it, for example /var/lib/mysql. Not only did you have to get the new drive ready, you then had to shut things down, copy the data over to the new disk, and update fstab so the mount survived reboots. You will be happy to hear those days are over. The initial setup takes a few more steps, but after that the space is just magically available for LVM to put wherever you need it.

In this example, I am using VMware. I provisioned a RHEL8 VM with a single 20GB disk. Once it was installed, I added a second 20GB disk.
 

[root@rhel8 ~]# fdisk -l | grep "Disk /dev/sd"
Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
[root@rhel8 ~]#

/dev/sda is the disk that is currently holding my OS. /dev/sdb is the new disk.
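
If you want to double-check which disk is which before touching anything, lsblk gives a quick tree view as well (a minimal sketch; device names may come up differently on your system):

# list block devices with their size, type, and mount points
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
# the new, empty disk shows up with no partitions and nothing mounted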
 

Partition

Add a partition

[root@rhel8 ~]# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x53fe0e1f.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-41943039, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-41943039, default 41943039):

Created a new partition 1 of type 'Linux' and of size 20 GiB.

Command (m for help): p
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x53fe0e1f

Device     Boot Start      End  Sectors Size Id Type
/dev/sdb1        2048 41943039 41940992  20G 83 Linux

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

[root@rhel8 ~]#

Above, we created a new partition on /dev/sdb. The new partition is /dev/sdb1, since it is the first partition on the disk. For any field where I didn't enter a value, we went with the default, which is to use the entire disk. Now that we have a new partition, we need to tag it as an LVM partition (id 8e), since it is currently a plain Linux partition (id 83).
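
As an aside, if you are scripting this across a bunch of machines, sfdisk can create the same whole-disk partition and set the LVM type in one non-interactive step. This is just a sketch of that approach (adapt the device name, and note it does both the create and the tag at once):

# create a single partition spanning /dev/sdb and mark it as type 8e (Linux LVM)
echo 'type=8e' | sfdisk /dev/sdb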
 

Tag the partition

[root@rhel8 ~]# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

[root@rhel8 ~]# fdisk -l /dev/sdb
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x53fe0e1f

Device     Boot Start      End  Sectors Size Id Type
/dev/sdb1        2048 41943039 41940992  20G 8e Linux LVM
[root@rhel8 ~]#

As you can see above, our partition is created and tagged as LVM. Honestly, the tag doesn't do anything other than let other programs and people know what the partition is for, but it is good practice to properly partition and label things, even in the cloud.
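
If you ever want to confirm the partition id without opening fdisk again, lsblk can print it directly (a small sketch; on a dos/MBR disklabel the LVM tag shows up as 0x8e):

# show the partition type id next to the device name
lsblk -o NAME,SIZE,PARTTYPE /dev/sdb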

Now that our partition is set up properly, let's move on to LVM.
 

LVM

Add PV to VG

[root@rhel8 ~]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created.
[root@rhel8 ~]# vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  rhel_rhel8   1   2   0 wz--n- <19.00g    0
[root@rhel8 ~]# pvs
  PV         VG         Fmt  Attr PSize   PFree
  /dev/sda2  rhel_rhel8 lvm2 a--  <19.00g      0
  /dev/sdb1             lvm2 ---  <20.00g <20.00g
[root@rhel8 ~]# vgextend rhel_rhel8 /dev/sdb1
  Volume group "rhel_rhel8" successfully extended
[root@rhel8 ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  rhel_rhel8   2   2   0 wz--n- 38.99g <20.00g
[root@rhel8 ~]#

First, we have to “create” the physical volume (PV), so we run pvcreate on the partition and check the space available with pvs. Then we run vgextend to add the new PV to the “rhel_rhel8” volume group (VG). Now we see the volume group has 20G free.
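
Extending the existing VG is only one option, by the way. If you would rather keep the new disk separate from the OS volume group, you could create a brand new VG on it instead. A quick sketch, using a made-up VG name:

# put the new PV into its own volume group instead of extending rhel_rhel8
# "data_vg" is just an example name
vgcreate data_vg /dev/sdb1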
 

Create a new Logical Volume(LV)

[root@rhel8 ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  rhel_rhel8   2   2   0 wz--n- 38.99g <20.00g
[root@rhel8 ~]# lvs
  LV   VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel_rhel8 -wi-ao---- <17.00g
  swap rhel_rhel8 -wi-ao----   2.00g
[root@rhel8 ~]#

As you can see, the VG shows 20GB free. Let's create a new 1G volume, write an xfs file system to it, and mount it at /1gig_lv.
 

[root@rhel8 ~]# lvcreate -L 1G -n 1gig_lv rhel_rhel8
  Logical volume "1gig_lv" created.
[root@rhel8 ~]# lvs
  LV      VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  1gig_lv rhel_rhel8 -wi-a-----   1.00g
  root    rhel_rhel8 -wi-ao---- <17.00g
  swap    rhel_rhel8 -wi-ao----   2.00g
[root@rhel8 ~]# mkfs.xfs /dev/mapper/rhel_rhel8-1gig_lv
meta-data=/dev/mapper/rhel_rhel8-1gig_lv isize=512    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@rhel8 ~]# mkdir /1gig_lv
[root@rhel8 ~]# mount /dev/mapper/rhel_rhel8-1gig_lv /1gig_lv/
[root@rhel8 ~]# cd /1gig_lv/
[root@rhel8 1gig_lv]# ls -l
total 0
[root@rhel8 1gig_lv]# touch test
[root@rhel8 1gig_lv]# mount -l | grep 1gig
/dev/mapper/rhel_rhel8-1gig_lv on /1gig_lv type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
[root@rhel8 1gig_lv]#

The syntax of the lvcreate command could be a bit friendlier, but it is straightforward enough: we used lvcreate to create a 1G volume named 1gig_lv in the rhel_rhel8 volume group. We then used mkfs.xfs to write a brand new xfs file system to that LV. Now that it has a file system, it is mountable, so we mounted it on /1gig_lv. If you want this to be permanent, you need to add it to /etc/fstab. For this volume, I would add: /dev/mapper/rhel_rhel8-1gig_lv /1gig_lv xfs defaults 0 0.
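
Here is roughly what that looks like in practice (a sketch of adding the fstab entry and sanity-checking it before you trust it to a reboot):

# append the mount to /etc/fstab so it survives a reboot
echo '/dev/mapper/rhel_rhel8-1gig_lv /1gig_lv xfs defaults 0 0' >> /etc/fstab

# sanity check: unmount it, then let mount -a pick it back up from fstab
cd /             # make sure you are not sitting inside the mount point
umount /1gig_lv
mount -a
findmnt /1gig_lv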
 

Extend an existing Logical Volume(LV)

[root@rhel8 1gig_lv]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  rhel_rhel8   2   3   0 wz--n- 38.99g <19.00g
[root@rhel8 1gig_lv]# lvs
  LV      VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  1gig_lv rhel_rhel8 -wi-ao----   1.00g
  root    rhel_rhel8 -wi-ao---- <17.00g
  swap    rhel_rhel8 -wi-ao----   2.00g
[root@rhel8 1gig_lv]#

Above are the current vgs/lvs after the last exercise. Let's say we now want to extend our root LV by 1G.
 


[root@rhel8 1gig_lv]# lvextend -L +1G /dev/mapper/rhel_rhel8-root
  Size of logical volume rhel_rhel8/root changed from <17.00 GiB (4351 extents) to <18.00 GiB (4607 extents).
  Logical volume rhel_rhel8/root successfully resized.
[root@rhel8 1gig_lv]# xfs_growfs /
meta-data=/dev/mapper/rhel_rhel8-root isize=512    agcount=4, agsize=1113856 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=4455424, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 4455424 to 4717568
[root@rhel8 1gig_lv]#

Since there was free space in the volume group, we were able to extend the logical volume by 1G. But even though the LV is now bigger, we still need to tell the file system that it has more room; otherwise, we won't be able to use the new space. So we called xfs_growfs. It is a bit unintuitive that you point it at the mount point rather than the LV, but it is what it is.
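
As a side note, if you want to skip the separate grow step, lvextend can resize the file system for you at the same time. A sketch of the one-command version (the -r/--resizefs flag calls the appropriate grow tool for whatever file system is on the LV):

# extend the LV by 1G and grow the file system on it in a single step
lvextend -r -L +1G /dev/mapper/rhel_rhel8-root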
 

Conclusion

[root@rhel8 1gig_lv]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  rhel_rhel8   2   3   0 wz--n- 38.99g <18.00g
[root@rhel8 1gig_lv]# lvs
  LV      VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  1gig_lv rhel_rhel8 -wi-ao----   1.00g
  root    rhel_rhel8 -wi-ao---- <18.00g
  swap    rhel_rhel8 -wi-ao----   2.00g
[root@rhel8 1gig_lv]# pvs
  PV         VG         Fmt  Attr PSize   PFree
  /dev/sda2  rhel_rhel8 lvm2 a--  <19.00g      0
  /dev/sdb1  rhel_rhel8 lvm2 a--  <20.00g <18.00g
[root@rhel8 1gig_lv]#

Here is where we landed. We allocated 2G in total: 1G to a new LV, and 1G to extend the existing root LV. The biggest thing to note here is that your block devices and your file systems are handled by separate commands. You have to make sure a file system gets created on new LVs, and grown on existing ones.
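
To recap, the whole flow from blank disk to usable space boils down to a handful of commands (condensed from the steps above, using the device and LV names from this example):

fdisk /dev/sdb                                # create /dev/sdb1 and tag it 8e (Linux LVM)
pvcreate /dev/sdb1                            # initialize the partition as a physical volume
vgextend rhel_rhel8 /dev/sdb1                 # add the PV to the existing volume group
lvcreate -L 1G -n 1gig_lv rhel_rhel8          # carve out a new logical volume
mkfs.xfs /dev/mapper/rhel_rhel8-1gig_lv       # write a file system to the new LV
lvextend -L +1G /dev/mapper/rhel_rhel8-root   # grow an existing LV
xfs_growfs /                                  # grow the file system to match
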
Good luck, hopefully this helps!