Determine the best server for update

Wherever you are on the planet, you need a way to determine which of the many available servers are reachable and which one has the lowest latency.
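As a quick illustration before automating anything, a single mirror can be timed by hand with curl's built-in timers (the URL here is just the official package server):

```shell
# Time one HTTPS request end to end; time_total is printed in seconds.
curl -s -o /dev/null -w "%{time_total}\n" https://pkg.opnsense.org/
```

Running this against two or three candidate mirrors already shows how much latency varies by region; the script below simply does the same thing systematically.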

Let's create a mirrors list:

vi mirrors.list

Mirrors (this list was reverse-engineered from the official OPNsense download page):

https://ftp.cc.uoc.gr/mirrors/opnsense/
https://mirror-opnsense.serverbase.ch/
https://mirror.aarnet.edu.au/pub/opnsense/
https://mirror.ams1.nl.leaseweb.net/opnsense/
https://mirror.catalyst.net.nz/opnsense/
https://mirror.cedia.org.ec/opnsense/
https://mirror.cloudfence.com.br/opnsense/
https://mirror.cs.odu.edu/opnsense/
https://mirror.distly.kr/opnsense
https://mirror.dns-root.de/opnsense/
https://mirror.eliv.digital/opnsense/
https://mirror.fra10.de.leaseweb.net/opnsense/
https://mirror.hemino.net/opnsense/
https://mirror.init7.net/opnsense/
https://mirror.keiminem.com/opnsense/
https://mirror.koreapixel.kr/opnsense/
https://mirror.krfoss.org/opnsense/
https://mirror.leitecastro.com/opnsense/
https://mirror.level66.network/opnsense-dist/
https://mirror.marwan.ma/opnsense/
https://mirror.meowsmp.net/opnsense/
https://mirror.mtl2.ca.leaseweb.net/opnsense/
https://mirror.ntct.edu.tw/opnsense/
https://mirror.pangkin.com/opnsense/
https://mirror.raiolanetworks.com/opnsense/
https://mirror.serverion.com/opnsense/
https://mirror.sfo12.us.leaseweb.net/opnsense/
https://mirror.techlabs.co.kr/opnsense/
https://mirror.ueb.edu.ec/opnsense/
https://mirror.uvensys.de/opnsense/
https://mirror.venturasystems.tech/opnsense/
https://mirror.verinomi.com/opnsense/
https://mirror.vraphim.com/opnsense/
https://mirror.wdc1.us.leaseweb.net/opnsense/
https://mirror.winsub.kr/opnsense/
https://mirror.zyner.org/mirror/opnsense/
https://mirror.zzunipark.com/opnsense/
https://mirror1.isatisidc.ir/opnsense/
https://mirrors.dotsrc.org/opnsense/
https://mirrors.hopbox.net/opnsense/
https://mirrors.komogoto.com/opnsense/
https://mirrors.nycbug.org/pub/opnsense/
https://mirrors.ocf.berkeley.edu/opnsense/
https://mirrors.pku.edu.cn/opnsense/
https://opnsense-mirror.hiho.ch/
https://opnsense.aivian.org/
https://opnsense.c0urier.net/
https://opnsense.org/download/
https://pkg.opnsense.org/
https://www.mirrorservice.org/sites/opnsense.org/

Once that's done, let's create a script to ping and time all of them:

vi url-speed-rank.sh


#!/usr/bin/env bash
# url-speed-rank.sh
# Combines:
#  - ICMP RTT average per host (ping)
#  - HTTP(S) total time per URL (curl)
# Outputs a sorted table + summary with the fastest N URLs (default 3).

set -o pipefail

VERBOSE=0
TOPN=3

# Ping knobs
PING_COUNT=3
PING_TIMEOUT=1   # seconds (best-effort portable)

# Curl knobs
CURL_TIMEOUT=5  # seconds

usage() {
  cat >&2 <<EOF
Usage: $0 [-v] [-n TOPN] [-c PING_COUNT] [-p PING_TIMEOUT] [-t CURL_TIMEOUT] <urls_file>

  -v              verbose progress (stderr)
  -n TOPN         how many fastest URLs to summarize (default: ${TOPN})
  -c PING_COUNT   ping packets per host (default: ${PING_COUNT})
  -p PING_TIMEOUT ping timeout per packet, seconds (default: ${PING_TIMEOUT})
  -t CURL_TIMEOUT curl max time per URL, seconds (default: ${CURL_TIMEOUT})

Input file:
  - one URL/host per line
  - empty lines and lines starting with # are ignored
  - if scheme missing, https:// is assumed
EOF
  exit 1
}

log() { [[ "$VERBOSE" -eq 1 ]] && echo "$*" >&2; }

while getopts ":vn:c:p:t:" opt; do
  case "$opt" in
    v) VERBOSE=1 ;;
    n) TOPN="$OPTARG" ;;
    c) PING_COUNT="$OPTARG" ;;
    p) PING_TIMEOUT="$OPTARG" ;;
    t) CURL_TIMEOUT="$OPTARG" ;;
    *) usage ;;
  esac
done
shift $((OPTIND - 1))

INPUT="$1"
[[ -z "$INPUT" ]] && usage
[[ ! -r "$INPUT" ]] && { echo "Error: cannot read file '$INPUT'" >&2; exit 1; }

tmp_urls="$(mktemp)"
tmp_hosts="$(mktemp)"
tmp_ping="$(mktemp)"
tmp_results="$(mktemp)"

cleanup() { rm -f "$tmp_urls" "$tmp_hosts" "$tmp_ping" "$tmp_results"; }
trap cleanup EXIT

# 1) Normalize input → list of URLs (deduped)
#    - skip empty/comments
#    - prepend https:// if missing scheme
awk '
  /^[[:space:]]*$/ { next }
  /^[[:space:]]*#/ { next }
  {
    gsub(/^[[:space:]]+|[[:space:]]+$/, "", $0)
    url=$0
    if (url !~ /^https?:\/\//) url="https://" url
    if (!seen[url]++) print url
  }
' "$INPUT" > "$tmp_urls"

TOTAL_URLS="$(wc -l < "$tmp_urls" | tr -d ' ')"
[[ "$TOTAL_URLS" -eq 0 ]] && { echo "No URLs to test (after filtering comments/empty lines)." >&2; exit 1; }

# 2) Extract unique hosts from URLs
#    host = strip scheme, then cut at first /, :, ?, #
sed -E 's|^[a-zA-Z]+://||; s|[/?#].*$||; s|:.*$||' "$tmp_urls" | sort -u > "$tmp_hosts"

# 3) Ping each host once → avg RTT (ms) or 999999 on failure
ping_avg_ms() {
  local host="$1"
  local out

  # Try Linux-style first, then alternate (some environments differ).
  if out="$(ping -c "$PING_COUNT" -W "$PING_TIMEOUT" "$host" 2>/dev/null)"; then
    :
  elif out="$(ping -c "$PING_COUNT" -t "$PING_TIMEOUT" "$host" 2>/dev/null)"; then
    :
  else
    echo "999999"
    return 0
  fi

  # Linux: "rtt min/avg/max/mdev = 10.1/11.2/..."
  # macOS: "round-trip min/avg/max/stddev = 10.1/11.2/..."
  awk -F'/' '/round-trip|rtt/ {print $5; found=1} END{ if(!found) print "999999" }' <<<"$out"
}

log "[*] Pinging hosts ($(wc -l < "$tmp_hosts" | tr -d ' ') unique) ..."
while IFS= read -r host; do
  [[ -z "$host" ]] && continue
  log "    ping: $host"
  avg="$(ping_avg_ms "$host")"
  printf "%s\t%s\n" "$host" "$avg" >> "$tmp_ping"
done < "$tmp_hosts"

# 4) Test each URL with curl → time_total + http_code; join ping by host
log "[*] Curling URLs (${TOTAL_URLS}) ..."
while IFS= read -r url; do
  log "    curl: $url"

  host="$(sed -E 's|^[a-zA-Z]+://||; s|[/?#].*$||; s|:.*$||' <<<"$url")"
  ping_ms="$(awk -F'\t' -v h="$host" '$1==h{print $2; found=1} END{if(!found) print "999999"}' "$tmp_ping")"

  # We capture both time_total and http_code. On hard error, curl prints nothing; treat as failure.
  curl_out="$(curl \
    --silent \
    --output /dev/null \
    --location \
    --max-time "$CURL_TIMEOUT" \
    --write-out "%{time_total}\t%{http_code}" \
    "$url" 2>/dev/null || true)"

  curl_time="$(awk -F'\t' '{print $1}' <<<"$curl_out")"
  http_code="$(awk -F'\t' '{print $2}' <<<"$curl_out")"

  # Empty output, zero time, or HTTP code 000 (no response, e.g. timeout)
  # all count as failures.
  if [[ -z "$curl_time" || "$curl_time" == "0.000"* || "$http_code" == "000" ]]; then
    curl_time="999.999"
    http_code="${http_code:-000}"
    status="FAIL"
  else
    # Consider 2xx/3xx as OK
    if [[ "$http_code" =~ ^2|^3 ]]; then
      status="OK"
    else
      status="HTTP_$http_code"
    fi
  fi

  # Sort key first (curl seconds), then printable columns
  # Columns: CURL_S  PING_MS  HTTP  STATUS  URL
  printf "%08.3f\t%10.2f\t%3s\t%-8s\t%s\n" \
    "$curl_time" "$ping_ms" "$http_code" "$status" "$url" >> "$tmp_results"
done < "$tmp_urls"

# 5) Print ranked table
echo
echo "Ranked (fastest → slowest) by HTTP total time:"
printf "%-10s %-10s %-4s %-8s %s\n" "CURL_S" "PING_MS" "HTTP" "STATUS" "URL"
printf "%-10s %-10s %-4s %-8s %s\n" "------" "-------" "----" "------" "---"
sort -n "$tmp_results" | awk -F'\t' '{printf "%-10s %-10s %-4s %-8s %s\n",$1,$2,$3,$4,$5}'

# 6) Summary stats + fastest TOPN URLs
echo
echo "Summary:"
unique_hosts="$(wc -l < "$tmp_hosts" | tr -d ' ')"
ping_fail_hosts="$(awk -F'\t' '$2>=999999 {c++} END{print c+0}' "$tmp_ping")"
curl_fail_urls="$(awk -F'\t' '$1>=999.999 || $4=="FAIL" {c++} END{print c+0}' "$tmp_results")"
ok_urls="$(awk -F'\t' '$4=="OK" {c++} END{print c+0}' "$tmp_results")"

# Average curl time across successful URLs (status OK)
avg_ok_curl="$(awk -F'\t' '$4=="OK"{sum+=$1; n++} END{ if(n) printf "%.3f", sum/n; else print "n/a" }' "$tmp_results")"

echo "  URLs tested:           $TOTAL_URLS"
echo "  Unique hosts:          $unique_hosts"
echo "  OK URLs (2xx/3xx):     $ok_urls"
echo "  Curl failures/timeout: $curl_fail_urls"
echo "  Ping failures:         $ping_fail_hosts"
echo "  Avg curl time (OK):    ${avg_ok_curl}s"

echo
echo "Fastest ${TOPN} URL(s) to use for update fetching (by HTTP total time):"
sort -n "$tmp_results" | head -n "$TOPN" | awk -F'\t' '{printf "  %s  (curl=%ss, ping=%sms, http=%s, %s)\n",$5,$1,$2,$3,$4}'

Make it executable:

chmod +x ./url-speed-rank.sh

And run it:

./url-speed-rank.sh -v mirrors.list

Results (screenshot of the ranked table and summary output):

Let's take a closer look at the results.

The first column shows how long the page took to load. Hosts with a ping of 999999 ms simply do not permit ICMP. Hosts returning HTTP 404 are empty or misconfigured.

The short summary at the end gives an idea of which mirrors to focus on.

The same script can be used to check the reachability of any other list of mirrors.

Let's perform a manual verification:

curl https://mirror.verinomi.com/opnsense/

The page is accessible, good.
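Instead of eyeballing the returned HTML, the status code alone can confirm reachability:

```shell
# Print only the HTTP status code for the mirror root; 200 means reachable.
curl -sI -o /dev/null -w "%{http_code}\n" https://mirror.verinomi.com/opnsense/
```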

At the time of writing (2026-04-08), the current major version is 26, running on FreeBSD 14.

Let's check that recent upgrades are available on that particular mirror.
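One way to gauge package freshness is the Last-Modified header of the package catalogue. The path below follows the usual OPNsense repository layout, but the FreeBSD:14:amd64/26.1 segment and the packagesite.pkg file name are assumptions; adjust them to your architecture and release as shown under System > Firmware:

```shell
# Read the Last-Modified timestamp of the package catalogue on the mirror.
# NOTE: the repository path is an assumption -- verify it against the
# directory listing of the mirror itself.
curl -sI "https://mirror.verinomi.com/opnsense/FreeBSD:14:amd64/26.1/latest/packagesite.pkg" \
  | grep -i '^last-modified'
```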

This tells me that the packages are two weeks old, which is good.

Some servers sit behind a CDN and are protected; depending on your region, they may be harder (or easier) to reach.

To configure the selected server, navigate to System > Firmware > Settings, choose (custom) from the Mirror dropdown, and type the server URL.
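After saving, the effective repository configuration can also be double-checked from an SSH shell; `pkg -vv` is standard FreeBSD tooling and prints the configured repositories, including their URLs (the grep pattern is just a convenience and assumes the repo is named OPNsense):

```shell
# Print the effective pkg repository configuration and pick out the
# OPNsense repo block, which contains the mirror URL.
pkg -vv | grep -A 3 'OPNsense: {'
```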

Back on the Status page, a check will indicate that the repository mirror is configured correctly.

You should see a dialog fetching the packages; back on the Status page, the timestamp of the latest update check should have changed.

Seeing that a newer base is available and that packages will be upgraded is a good sign.

The upgrade itself

The firmware settings include an option to reboot the server automatically after an upgrade; once enabled, the server restarts when the upgrade completes.

After the reboot:

Nothing to update. Good.