Disks and partitions
- LVM - Creation of first physical volume, volume group and logical volume.
- LVM - extend the volume
- NFS export
- Moving most consuming directory to the separate disk
LVM - Creation of first physical volume, volume group and logical volume.
Create a new physical volume, volume group and logical volume.
Mount it to a freshly created directory.
Add to fstab for automatic mount during boot process.
apt install \
lvm2
dnf install \
lvm2
Define variables
hostname
export host="$(hostname)"
lsblk
export dev="/dev/sdc"
export size="9.9G"
Find out which filesystem type is mainly used on the system
df -hT
In my case it is 'ext4', so I shall continue with that. Red Hat family distros might prefer 'xfs'
export fs="ext4"
# export fs="xfs"
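The filesystem type can also be read programmatically instead of eyeballing `df -hT`. A small sketch, assuming the new volume should match the type of the root filesystem:

```shell
# Derive the root filesystem type from `df -T` instead of reading it by eye.
# Assumes the new volume should use the same type as the root filesystem.
rootfs_type="$(df -T / | awk 'NR==2 {print $2}')"
echo "Root filesystem type: ${rootfs_type}"
export fs="${rootfs_type}"
```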
pvcreate ${dev}
vgcreate vg-${host}-data ${dev}
lvcreate -L ${size} -n lv-${host}-data vg-${host}-data
lvscan
mkfs.${fs} /dev/vg-${host}-data/lv-${host}-data
mkdir -p /mnt/${host}-data
mount /dev/vg-${host}-data/lv-${host}-data /mnt/${host}-data
df -h
#TODO: add into /fstab with oneliner
echo "add me to fstab"
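The TODO above could be solved with a one-liner that composes the fstab entry from the variables already defined. A sketch, demonstrated on a temporary file for safety; point fstab at /etc/fstab on the real system (host "demo" is a stand-in value):

```shell
# Compose the fstab line from the variables defined earlier and append it.
# Demo on a temp file; use fstab="/etc/fstab" on the real system.
host="demo"; fs="ext4"       # stand-ins for the values exported earlier
fstab="$(mktemp)"
echo "/dev/vg-${host}-data/lv-${host}-data /mnt/${host}-data ${fs} defaults 0 1" | tee -a "${fstab}"
cat "${fstab}"
```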
mount | grep ${host}
nano /etc/fstab
/dev/mapper/vg--host--data-lv--host--data /mnt/host-data ext4 defaults 0 1
Reload systemd units and remount all drives; the new drive must now be mounted.
systemctl daemon-reload
mount -a
When possible, restart the VM to ensure the disk is mounted at boot
shutdown -r now
LVM - extend the volume
To extend the volume, we have to attach a new physical (virtual) disk on the hypervisor.
Preparations
We need to understand which disk needs to be extended; in my case it is the root filesystem that is full.
df -h
and verify it is LVM
lsblk
After adding the disk, it should appear in the list:
lsblk
Let's note which file system is in use, 'xfs' or 'ext4'; it will be needed later.
lsblk -f
We are good to go. Replace the values of the dev and size variables with your own; subtract 0.1 from the partition size:
export dev="/dev/sdc"
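The "subtract 0.1" step above can be computed instead of done by hand. A sketch, assuming a plain "G" suffix as `lsblk` prints in its SIZE column (disk_size here is a stand-in value):

```shell
# Compute the LV size as the raw disk size minus 0.1G, as the text advises.
# Assumes a plain "G" suffix, as lsblk prints for the SIZE column.
disk_size="50G"               # e.g. the SIZE column of `lsblk` for ${dev}
export size="$(printf '%s\n' "${disk_size}" | awk '{sub(/G$/,""); printf "%.1fG", $1 - 0.1}')"
echo "${size}"
```

Alternatively, lvextend/lvcreate accept relative extents such as `-l +100%FREE`, which avoids the arithmetic altogether.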
The disk needs to be marked as a "Physical Volume" to be able to join a "Volume Group"; create it and list:
pvcreate ${dev}
pvscan
First, let's find out which "Volume Group" needs to be extended
vgscan
In my case, it is ol_vbox. Let's define it:
export vg="ol_vbox"
Let's add the new "Physical Volume" to the "Volume Group":
echo ${vg}
echo ${dev}
vgextend ${vg} ${dev}
List Physical Volumes; the new disk should be shown
pvscan
Now it is time to extend the Logical Volume. Let's observe the current ones first
lvdisplay
In my case, the Logical Volume inside this Volume Group has the path "/dev/ol_vbox/root". Set the path and the extension size.
export lv="/dev/ol_vbox/root"
export size="49.9G"
and extend "Logical Volume"
lvextend -L +${size} ${lv}
Note the current partition size, grow the filesystem with the tool matching its type, and check the size again:
df -h
# for "ext4"
resize2fs ${lv}
# for "xfs"
xfs_growfs ${lv}
df -h
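The two cases above can be selected from the filesystem type instead of remembered by hand. A sketch; note that `resize2fs` takes the device path while `xfs_growfs` takes the mount point:

```shell
# Select the grow tool from the filesystem type (the two cases above).
# resize2fs takes the device; xfs_growfs takes the mount point.
fs="ext4"                 # on a live system: fs="$(lsblk -no FSTYPE ${lv})"
case "${fs}" in
  ext4) grow_cmd="resize2fs" ;;
  xfs)  grow_cmd="xfs_growfs" ;;
  *)    grow_cmd="" ;;    # unhandled filesystem type
esac
echo "${grow_cmd}"
```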
Q.E.D.
Storage management using LVM is a logical process, once understood.
NFS export
Mount an NFS export to a freshly created directory. Add it to fstab for automatic mounting during the boot process.
Preparations:
sudo su
dnf install \
nfs-utils
Configure
# vi /etc/idmapd.conf
# [General]
# Verbosity = 0
# Domain = (domain)
# systemctl daemon-reload
# systemctl restart rpcbind
Allow network connectivity:
For Redhat family:
firewall-cmd --list-all
firewall-cmd --permanent --add-service=nfs
firewall-cmd --reload
firewall-cmd --list-all
For Debian family:
ufw status
ufw allow nfs
ufw enable
Check connectivity
Decide which declaration will be used: FQDN or IP address (depending on situation and purpose).
export src_host="(source host)"
ping ${src_host}
showmount -e ${src_host}
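The export list can also be extracted programmatically. A sketch against a canned sample (hypothetical host and paths); real `showmount -e` output is a header line followed by one "path client-spec" pair per line:

```shell
# Extract export paths from `showmount -e` style output.
# The sample below is hypothetical; pipe real showmount output instead.
sample='Export list for nfs.example.com:
/ifs/data/share1 10.0.0.0/24
/ifs/data/share2 *'
exports="$(printf '%s\n' "${sample}" | awk 'NR>1 {print $1}')"
printf '%s\n' "${exports}"
```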
Verify that needed export is listed in the output and set a variable:
export src_export="(name of exported directory)"
When mounting a shared network drive, we follow the "one-to-many" concept: the shared storage is identified in the mount table by source host and exported path, in hierarchical order. This makes it easier to add further mounts from the same host later. Preparations are as follows:
export dir="/mnt/${src_host}/${src_export}/"
echo ${dir}
read -p "Correct?"
mkdir -p ${dir}
ls -la ${dir}
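The one-to-many layout above scales naturally to several exports from the same host. A demo under a throwaway root with stand-in host and export names (use /mnt and your own values on a real system):

```shell
# Demonstrate the one-to-many hierarchy: one directory per source host,
# one subdirectory per export. Stand-in names; use /mnt for real mounts.
root="$(mktemp -d)"
src_host="nas1.example.com"
for src_export in data backups media; do
  mkdir -p "${root}/mnt/${src_host}/${src_export}"
done
find "${root}/mnt/${src_host}" -mindepth 1 -maxdepth 1 -type d | sort
```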
Skip this if you created the directory using the previous (one-to-many) concept.
Create a local directory where the NFS export will be mounted. The naming here usually comes from the name of the disk attached to the VM; the -data suffix clarifies that it is a data disk. This is the "one-to-one" concept, meaning the export will be mounted on a single host only.
export host="$(hostname -s)"
export dir="/mnt/${host}-data/"
echo ${dir}
mkdir -p ${dir}
If or when there is a need to use the mounted resource on different machines, consider creating symbolic links; this keeps application configuration portable. In my case, I shall mount the NFS export for a Nextcloud application (ncp1 = NextCloud Production, 1st environment); change it to whatever you need. Do not worry about double forward slashes: path resolution collapses them into a single slash.
export symlink="/mnt/ncp1"
ln -s ${dir} ${symlink}
ls -la ${symlink}
It should end up like this:
lrwxrwxrwx. 1 root root 21 Apr 29 13:37 /mnt/ncp1 -> /mnt/(src_host)/(src_export)
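The symlink pattern can be rehearsed in a throwaway directory tree before touching /mnt (stand-in paths below, with ncp1 as in the example above):

```shell
# Rehearse the symlink pattern in a scratch tree; writes through the link
# land in the target directory. Stand-in paths throughout.
base="$(mktemp -d)"
dir="${base}/mnt/nas1.example.com/data/"   # note the trailing slash, as above
symlink="${base}/mnt/ncp1"
mkdir -p "${dir}"
ln -s "${dir}" "${symlink}"
ls -la "${symlink}"
touch "${symlink}/probe.md"                # write through the link
ls "${dir}"
```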
Double-check the mount point and free disk space
df -h ${symlink}
As we can see, we are still on the local drive, because nothing is mounted yet.
Before mounting the export, check where the target directory is mounted; it should reside on the root / filesystem and be empty.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ol-root 99G 8.2G 91G 9% /
ls -la ${dir}
total 0
drwxr-xr-x. 2 root root 6 Apr 29 13:56 .
drwxr-xr-x. 3 root root 18 Apr 29 13:56 ..
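The "target must be empty" check above can be scripted. A sketch, using a scratch directory as a stand-in for ${dir}:

```shell
# Scripted pre-mount check: refuse to proceed if the target directory is
# not empty. Scratch directory stands in for the real ${dir}.
dir="$(mktemp -d)"
if [ -z "$(ls -A "${dir}")" ]; then
  status="empty, safe to mount"
else
  status="not empty, investigate first"
fi
echo "${status}"
```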
Finally, check variables and mount the export.
echo ${src_host}
echo ${src_export}
echo ${dir}
read -p "Sure to proceed?"
mount -t nfs4 -o nfsvers=4 ${src_host}:${src_export} ${dir}
mount | grep nfs
Check again
df -h ${dir}
ls -la ${dir}
The mount may be presented by IP address. The exported path should be mounted in the correct destination. For example:
df -h ${dir}
Filesystem Size Used Avail Use% Mounted on
10.x.x.x:/ifs/data/ARCHAZ/NCFS 1.0T 0 1.0T 0% /mnt/10.x.x.x/(src_export)
Let's add the new mount point to /etc/fstab to mount it automatically at system boot:
cat /etc/fstab
echo "${src_host}:${src_export} ${dir} nfs4 nfsvers=4,defaults,_netdev,rw,sec=sys 0 0" | tee -a /etc/fstab
tail -n5 /etc/fstab
systemctl daemon-reload
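The `tee -a` above appends unconditionally, so re-running the steps would duplicate the fstab line. A guarded, idempotent variant, demonstrated on a temp file with stand-in values (point fstab at /etc/fstab on a real system):

```shell
# Idempotent fstab append: only add the entry if it is not already there.
# Temp file and "nas1:/data" entry are stand-ins for the real values.
fstab="$(mktemp)"
entry="nas1:/data /mnt/nas1/data nfs4 nfsvers=4,defaults,_netdev,rw,sec=sys 0 0"
for run in 1 2; do               # run twice to prove idempotency
  grep -qF "${entry}" "${fstab}" || printf '%s\n' "${entry}" >> "${fstab}"
done
grep -c "nas1:/data" "${fstab}"
```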
Unmount and check automatic mounts, simulating a restart.
umount ${dir}
mount | grep nfs
read -p "Unmounted?"
mount -aF
mount | grep nfs
df -h ${dir}
Content
Check permissions on the destination directory
namei -mo ${dir}
Try to write to the destination
touch ${dir}/test.md
ls -latr ${dir}
When possible, restart the VM to ensure the disk is properly mounted, simulating a real restart.
uptime
shutdown -r now
uptime
mount | grep nfs
To troubleshoot, enable RPC NFS logging and tail the log:
rpcdebug -m nfsd -s all
tail -f /var/log/messages
To troubleshoot NFS, increase verbosity:
cat /proc/sys/sunrpc/nfsd_debug
cat /proc/sys/sunrpc/nfs_debug
echo 10 > /proc/sys/sunrpc/nfs_debug
Disable logging
rpcdebug -m nfsd -c
In case RPC ID mapping is not possible, let's configure the operating system not to check and match user account bindings. Please acknowledge that this will affect how permission changes behave.
# for Redhat Linux
sysctl -w nfs.nfs4_disable_idmapping=1
# for Oracle Linux
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
nfsidmap -c
To make it permanent
# for Redhat Linux
echo "nfs.nfs4_disable_idmapping=1" | tee -a /etc/sysctl.d/99-nfs-disable-idmapping.conf
sysctl --system
# for Oracle
echo "options nfs nfs4_disable_idmapping=1" | tee -a /etc/modprobe.d/nfs.conf
dracut -f
shutdown -r now
Check that permissions are no longer assigned to nobody:nobody
namei -mo /mnt/ncp/
Moving most consuming directory to the separate disk
In my case, one directory is consuming most of the disk space dedicated to the system.
Stop the service.
systemctl stop wazuh*
systemctl | grep wazuh
Rename directory
export src="/var/ossec"
mv ${src} ${src}.backup
Configure LVM as described in the LVM guide above, but do not create the directory or mount the disk at the location from the guide; instead, mount it at the renamed location:
mkdir -p ${src}
export host=$(hostname)
mount /dev/vg-${host}-data/lv-${host}-data ${src}
df -h
Add the mount point to fstab: copy the needed line to the buffer and paste it into /etc/fstab.
mount | grep ${host}
/dev/mapper/vg--host--data-lv--host--data on /var/ossec type ext4 (rw,relatime)
vi /etc/fstab
/dev/mapper/vg--host--data-lv--host--data /var/ossec ext4 errors=remount-ro 0 1
systemctl daemon-reload
mount -a
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 29G 20G 8.5G 70% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 783M 1000K 782M 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
efivarfs 56K 24K 27K 48% /sys/firmware/efi/efivars
/dev/sda16 881M 70M 749M 9% /boot
/dev/sda15 105M 6.1M 99M 6% /boot/efi
tmpfs 392M 12K 392M 1% /run/user/1003
/dev/mapper/vg--host--data-lv--host--data 98G 24K 93G 1% /var/ossec
Run a screen (tmux) session. Move files from the backup to the new destination.
tmux a
sudo su
apt install rsync
export src="/var/ossec"
# trailing slash matters! it tells rsync to copy the contents, not the directory itself.
rsync -avAXH --progress ${src}.backup/ ${src}
Verify integrity (the file list in the output must be empty)
# checksums, permissions, timestamps
rsync -avcn --delete ${src}.backup/ ${src}/
# content only
diff -qr ${src}.backup ${src}
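The verification steps can be rehearsed on throwaway trees, so the expected "no differences" outcome can be observed safely before running them against real data (paths below are stand-ins):

```shell
# Rehearse the integrity check: copy a tree, then diff it against the
# original. No diff output means the trees match. Stand-in paths.
src="$(mktemp -d)"
mkdir -p "${src}.backup"
echo "data" > "${src}.backup/file.txt"
cp -a "${src}.backup/." "${src}/"        # same effect as the rsync copy above
diff -qr "${src}.backup" "${src}" && echo "trees match"
```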
Verify size
du -s ${src}
8905080 /var/ossec
du -s ${src}.backup
8905676 /var/ossec.backup
Perform a VM restart to ensure the disk is mounted and the service works properly
shutdown -r now
Once happy, remove the backup
export src="/var/ossec"
rm -rf ${src}.backup