# Disks and partitions

# LVM - Creation of the first physical volume, volume group and logical volume

The author takes no responsibility for the guide below.

Create a new physical volume, volume group and logical volume.

Mount it to a freshly created directory.

Add it to fstab for automatic mounting during the boot process.

```bash
# Debian family
apt install \
    lvm2

# Redhat family
dnf install \
    lvm2

```

Define the variables. Set `size` slightly below the capacity reported by `lsblk` (e.g. `9.9G` for a 10G disk), so the logical volume fits within the available extents.

```bash
hostname
export host="$(hostname)"

lsblk

export dev="/dev/sdc"
export size="9.9G"

```

Check which filesystem type is mainly used on the system:

```bash
df -hT

```

[![](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2026-03/scaled-1680-/OEIa01xq9WswuY0J-image-1773573237831.png)](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2026-03/OEIa01xq9WswuY0J-image-1773573237831.png)

In my case it is `ext4`, so I shall continue with that. Redhat family distros typically prefer `xfs`.

```bash
export fs="ext4"
# export fs="xfs"

pvcreate ${dev}

vgcreate vg-${host}-data ${dev}

lvcreate -L ${size} -n lv-${host}-data vg-${host}-data

lvscan

```

[![](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2026-03/scaled-1680-/TtzmmTitVbpyOFCc-image-1773573394408.png)](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2026-03/TtzmmTitVbpyOFCc-image-1773573394408.png)

```bash
mkfs.${fs} /dev/vg-${host}-data/lv-${host}-data
mkdir -p /mnt/${host}-data
mount /dev/vg-${host}-data/lv-${host}-data /mnt/${host}-data
df -h

```

[![](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2026-03/scaled-1680-/1Mq8kdcIcvTlX5eA-image-1773573445878.png)](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2026-03/1Mq8kdcIcvTlX5eA-image-1773573445878.png)

```bash
#TODO: add into /etc/fstab with a one-liner
mount | grep ${host}
nano /etc/fstab

```
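The TODO above can be sketched by composing the device-mapper path from the VG/LV names; note that device-mapper doubles every dash that is part of a name. This is a hypothetical one-liner, assuming `host` and `fs` are set as above; review the echoed line before appending it:

```shell
vg="vg-${host}-data"
lv="lv-${host}-data"
# device-mapper escapes '-' inside names as '--'
dm="/dev/mapper/${vg//-/--}-${lv//-/--}"
line="${dm} /mnt/${host}-data ${fs} defaults 0 1"
echo "${line}"
# echo "${line}" | tee -a /etc/fstab   # uncomment once verified
```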

```ini
/dev/mapper/vg--host--data-lv--host--data /mnt/host-data ext4 defaults 0 1

```

[![](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2026-03/scaled-1680-/HN2YlPajc6qQVVrC-image-1773573600310.png)](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2026-03/HN2YlPajc6qQVVrC-image-1773573600310.png)

Reload the systemd units and remount all drives; the new drive must be mounted.

```bash
systemctl daemon-reload
mount -a

```

[![](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2026-03/scaled-1680-/dBBV8xfYSmKPtOUe-image-1773573651659.png)](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2026-03/dBBV8xfYSmKPtOUe-image-1773573651659.png)

When possible, restart the VM to ensure the disk is mounted automatically.

```bash
shutdown -r now

```

# LVM - extend the volume

To extend the volume, we first have to attach a new physical (virtual) disk on the hypervisor.

## Preparations

We need to understand which disk needs to be extended. In my case it is the root filesystem, which is full.

```bash
df -h

```

[![](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/scaled-1680-/bsf3mBhiQiJ1TY7B-image-1753066918364.png)](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/bsf3mBhiQiJ1TY7B-image-1753066918364.png)

and verify it is managed by LVM:

```bash
lsblk

```

[![](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/scaled-1680-/nUynxOxAeW14fYNp-image-1753067074668.png)](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/nUynxOxAeW14fYNp-image-1753067074668.png)

After adding the disk, it should appear in the list:

```bash
lsblk

```

[![](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/scaled-1680-/QPac3deDK85EUCgl-image-1753066783696.png)](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/QPac3deDK85EUCgl-image-1753066783696.png)

Let's note which filesystem is in use, `xfs` or `ext4`; it will be needed later.

```bash
lsblk -f

```

[![](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/scaled-1680-/PxDNlEAdU2fhI95P-image-1753067304566.png)](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/PxDNlEAdU2fhI95P-image-1753067304566.png)

We are good to go. Replace the value of the `dev` variable with your new device:

```bash
export dev="/dev/sdc"

```

The disk needs to be marked as a "Physical Volume" before it can join a "Volume Group". Create and list it:

```bash
pvcreate ${dev}
pvscan

```

[![](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/scaled-1680-/hyE3fOSAKf0cZI0u-image-1753068015252.png)](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/hyE3fOSAKf0cZI0u-image-1753068015252.png)

Next, let's find out which "Volume Group" needs to be extended:

```bash
vgscan

```

[![](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/scaled-1680-/MzJJBqsLFMjrCU12-image-1753068155973.png)](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/MzJJBqsLFMjrCU12-image-1753068155973.png)

In my case, it is `ol_vbox`. Let's define it:

```bash
export vg="ol_vbox"

```

Let's add the new "Physical Volume" to the "Volume Group":

```bash
echo ${vg}
echo ${dev}
vgextend ${vg} ${dev}

```

[![](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/scaled-1680-/AL0nogwrCgajX6lj-image-1753068416197.png)](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/AL0nogwrCgajX6lj-image-1753068416197.png)

List the Physical Volumes; the new disk should be shown:

```bash
pvscan

```

[![](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/scaled-1680-/e8Ust92SIaqzt43V-image-1753068482983.png)](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/e8Ust92SIaqzt43V-image-1753068482983.png)

Now it is time to extend the Logical Volume. Let's observe the current ones first:

```bash
lvdisplay

```

[![](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/scaled-1680-/wcgWYhfgwtHnlKG7-image-1753069224318.png)](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/wcgWYhfgwtHnlKG7-image-1753069224318.png)

In my case, the "Logical Volume" inside the "Volume Group" has the path `/dev/ol_vbox/root`. Set the extension size; subtract roughly `0.1G` from the size of the added disk so the extension fits within the free extents.

```bash
export lv="/dev/ol_vbox/root"
export size="49.9G"


```

and extend the "Logical Volume":

```bash
lvextend -L +${size} ${lv}

```

[![](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/scaled-1680-/ZeoQqsJgcvDAV360-image-1753069308477.png)](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/ZeoQqsJgcvDAV360-image-1753069308477.png)

Depending on which filesystem is in use, note the current size, grow the filesystem, and check the size again (alternatively, `lvextend -r` in the previous step would grow the filesystem automatically):

```bash
df -h

# for "ext4"
resize2fs ${lv}

# for "xfs"
xfs_growfs ${lv}

df -h

```

[![](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/scaled-1680-/HYPgPbO0JVZcOrDZ-image-1753069443887.png)](https://storage.googleapis.com/iau-data-dox/uploads/images/gallery/2025-07/HYPgPbO0JVZcOrDZ-image-1753069443887.png)

Q.E.D.

Storage management using LVM is a logical process, once understood.

# NFS export

Mount an NFS export to a freshly created directory and add it to fstab for automatic mounting during the boot process.

## Preparations:

```bash
sudo su
dnf install \
    nfs-utils

```

## Configure

```bash
# vi /etc/idmapd.conf

```

```ini
# [General]
# Verbosity = 0
# Domain = (domain)

```

```bash
# systemctl daemon-reload
# systemctl restart rpcbind

```

## Allow network connectivity:

For Redhat family:

```bash
firewall-cmd --list-all
firewall-cmd --permanent --add-service=nfs
firewall-cmd --reload
firewall-cmd --list-all

```

For Debian family:

```bash
ufw status
ufw allow nfs
ufw enable

```

## Check connectivity

Decide which declaration will be used: FQDN or IP address (depending on situation and purpose).

```bash
export src_host="(source host)"
ping ${src_host}
showmount -e ${src_host}

```

Verify that needed export is listed in the output and set a variable:

```bash
export src_export="(name of exported directory)"

```

When mounting a shared network drive, we follow the "one-to-many" concept: the shared storage is identified in the mount table by source host and exported path, in hierarchical order. This makes it easy to add further mounts from the same host later. Preparations are as follows:

```bash
export dir="/mnt/${src_host}/${src_export}/"
echo ${dir}
read -p "Correct?"
mkdir -p ${dir}
ls -la ${dir}

```

Skip this if you created the directory using the previous (one-to-many) concept. Otherwise, create a local directory where the NFS export will be mounted. The naming used here usually comes from the name of the disk attached to the VM; the `-data` suffix clarifies that it is a data disk. This is the "one-to-one" concept, meaning the export will be mounted on a single host only.

```bash
export host="$(hostname -s)"
export dir="/mnt/${host}-data/"
echo ${dir}
mkdir -p ${dir}

```

If there is a need to use the mounted resource under a stable path across machines, consider creating a symbolic link; this is useful when writing application configurations. In my case, I shall mount the NFS export for the Nextcloud application (ncp1 = NextCloud Production, 1st environment); change it to whatever you need. Do not worry about double forward slashes: path resolution collapses them into a single slash.

```bash
export symlink="/mnt/ncp1"
ln -s ${dir} ${symlink}
ls -la ${symlink}

```

The result should look like this:

```
lrwxrwxrwx. 1 root root 21 Apr 29 13:37 /mnt/ncp1 -> /mnt/(src_host)/(src_export)

```
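The link target can also be verified programmatically; a small sketch, assuming `symlink` is set as above:

```shell
# Resolve the link and print where it actually points.
target="$(readlink -f "${symlink}")"
echo "${symlink} -> ${target}"
```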

and double-check the mountpoint and free disk space:

```bash
df -h ${symlink}

```

As we can see, we are still on the local drive, because nothing is mounted yet. Before mounting the export, check where the target directory resides: it should live on the root `/` filesystem and it should be empty.

```
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/ol-root   99G  8.2G   91G   9% /

```

```bash
ls -la ${dir}

```

```
total 0
drwxr-xr-x. 2 root root  6 Apr 29 13:56 .
drwxr-xr-x. 3 root root 18 Apr 29 13:56 ..

```
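The emptiness check above can be scripted so that a non-empty directory stops the procedure; a sketch, assuming `dir` is set as above:

```shell
# `ls -A` lists everything except . and .. so any output means not empty.
if [ -n "$(ls -A "${dir}" 2>/dev/null)" ]; then
    echo "refusing to mount over non-empty directory: ${dir}" >&2
else
    echo "${dir} is empty, safe to mount"
fi
```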

Finally, check variables and mount the export.

```bash
echo ${src_host}
echo ${src_export}
echo ${dir}
read -p "Sure to process?"
mount -t nfs4 -o nfsvers=4 ${src_host}:${src_export} ${dir}
mount | grep nfs

```

Check again

```bash
df -h ${dir}
ls -la ${dir}

```

The mount may be presented by IP address. The exported path should be mounted at the correct destination. For example:

```bash
df -h ${dir}

```

```
Filesystem                     Size  Used Avail Use% Mounted on
10.x.x.x:/ifs/data/ARCHAZ/NCFS  1.0T     0  1.0T   0% /mnt/10.x.x.x/(src_export)

```

Let's add the new mount point to `fstab` so it mounts automatically at system boot:

```bash
cat /etc/fstab

```

```bash
echo "${src_host}:${src_export} ${dir} nfs4 nfsvers=4,defaults,_netdev,rw,sec=sys 0 0" | tee -a /etc/fstab
tail -n5 /etc/fstab
systemctl daemon-reload

```

Unmount and check automatic mounts, simulating a restart.

```bash
umount ${dir}
mount | grep nfs
read -p "Umounted?"
mount -aF
mount | grep nfs
df -h ${dir}

```

## Content

Check permissions along the destination path:

```bash
namei -mo ${dir}

```

Try to write to the destination

```bash
touch ${dir}/test.md
ls -latr ${dir}
rm ${dir}/test.md

```

When possible, restart the VM to ensure the export is properly mounted after a real restart.

```bash
uptime
shutdown -r now

uptime
mount | grep nfs

```

To troubleshoot, enable RPC NFS debug logging and tail the log:

```bash
rpcdebug -m nfsd -s a
tail -f /var/log/messages

```

To troubleshoot the NFS client, increase verbosity:

```bash
cat /proc/sys/sunrpc/nfsd_debug
cat /proc/sys/sunrpc/nfs_debug

echo 10 > /proc/sys/sunrpc/nfs_debug

```

Disable the debug logging:

```bash
rpcdebug -m nfsd -c

```

In case ID mapping is not possible, let's configure the operating system not to map user account bindings between client and server. Please acknowledge that this will affect how permissions are applied.

```bash
# for Redhat Linux
sysctl -w nfs.nfs4_disable_idmapping=1

# for Oracle Linux
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
nfsidmap -c

```

To make it permanent:

```bash
# for Redhat Linux
echo "nfs.nfs4_disable_idmapping=1" | tee -a /etc/sysctl.d/99-nfs-disable-idmapping.conf
sysctl --system

# for Oracle
echo "options nfs nfs4_disable_idmapping=1" | tee -a /etc/modprobe.d/nfs.conf
dracut -f
shutdown -r now

```

Check that permissions are no longer mapped to `nobody:nobody`:

```bash
namei -mo /mnt/ncp/

```

# Moving the most space-consuming directory to a separate disk

In my case, one directory is consuming most of the disk space dedicated to the system.

Stop the service.

```bash
systemctl stop wazuh*
systemctl | grep wazuh

```

Rename the directory:

```bash
export src="/var/ossec"
mv ${src} ${src}.backup

```

Configure LVM following the [guide](/books/disks-and-partitions/page/lvm), but do not create the directory or mount the disk there; instead, mount it at the original location:

```bash
mkdir -p ${src}
export host=$(hostname)
mount /dev/vg-${host}-data/lv-${host}-data ${src}
df -h

```

Add the mount point to fstab. Copy the needed line to the clipboard and adapt it for /etc/fstab:

```bash
mount | grep ${host}

```

```
/dev/mapper/vg--host--data-lv--host--data on /var/ossec type ext4 (rw,relatime)

```

```bash
vi /etc/fstab

```

```
/dev/mapper/vg--host--data-lv--host--data /var/ossec    ext4    errors=remount-ro    0 1

```

```bash
systemctl daemon-reload
mount -a
df -h

```

```
Filesystem                                           Size  Used Avail Use% Mounted on
/dev/root                                             29G   20G  8.5G  70% /
tmpfs                                                2.0G     0  2.0G   0% /dev/shm
tmpfs                                                783M 1000K  782M   1% /run
tmpfs                                                5.0M     0  5.0M   0% /run/lock
efivarfs                                              56K   24K   27K  48% /sys/firmware/efi/efivars
/dev/sda16                                           881M   70M  749M   9% /boot
/dev/sda15                                           105M  6.1M   99M   6% /boot/efi
tmpfs                                                392M   12K  392M   1% /run/user/1003
/dev/mapper/vg--host--data-lv--host--data             98G   24K   93G   1% /var/ossec

```

Run a terminal multiplexer (tmux) so the copy survives a disconnect. Move the files from the backup to the new destination:

```bash
tmux a
sudo su
apt install rsync
export src="/var/ossec"
# trailing slash matters! it tells rsync to copy the contents, not the directory itself.
rsync -avAXH --progress ${src}.backup/ ${src}

```

Verify integrity (the output must be empty):

```bash
# checksums, permissions, timestamps
rsync -avcn --delete ${src}.backup/ ${src}

# content only
diff -qr ${src}.backup ${src}

```

Verify the size:

```bash
du -s ${src}

```

```
8905080 /var/ossec

```

```bash
du -s ${src}.backup

```

```
8905676 /var/ossec.backup

```
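The two `du` readings can also be compared numerically instead of by eye; a small difference is expected because directory metadata takes space too. A sketch, assuming `src` is set as above (GNU `du` reports 1 KiB blocks by default):

```shell
a="$(du -s "${src}" | awk '{print $1}')"
b="$(du -s "${src}.backup" | awk '{print $1}')"
echo "difference: $((b - a)) KiB"
```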

## Perform a VM restart to ensure the disk is mounted and the service works properly

```bash
shutdown -r now

```

Once happy, remove the backup:

```bash
export src="/var/ossec"
rm -rf ${src}.backup

```