Setting up jumbo frames in my homelab

September 04, 2025

[Cover image. Source: リン]

Since I got the Hasivo F1100W-4SX-4XGT 10G Switch, which is an L3 managed switch, I’ve never really done any configuration on it other than changing its DHCP settings.

Some people in a ServeTheHome (STH) thread were asking about its jumbo frame support. At the time I didn’t even know jumbo frames were a thing, but the term somehow stuck with me.

So, today, when I thought I’d check out what exactly I can configure on my switch, the first thing that came to mind was jumbo frames.

Hasivo Switch

Setting up jumbo frames is easy enough on the web interface. You go to Port > Basic Configuration.

[Screenshot: the Port > Basic Configuration page on the switch’s web interface]

First, select the port you want to configure by ticking the checkbox on the left, then fill in the fields. Note that every field except “Description” and “DAC” must be set for the “Apply” button to work, so if you only want to change the jumbo frame size, re-enter the other fields with their current values.

Conventionally, jumbo frames use a 9000-byte MTU, meaning the IP packet can be at most 9000 bytes. However, once you account for overhead like the Ethernet header (14 bytes), the FCS (frame check sequence, 4 bytes), VLAN tags (4 bytes each), and so on, switches must accept frames larger than 9000 bytes. Many vendors converged on 9216 bytes because it covers most cases and is a multiple of 1024 (9 × 1024). I’ll use 9216 on my switch too.
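As a sanity check, here’s the worst-case arithmetic (my own back-of-the-envelope, assuming up to two stacked VLAN tags):

  9000  IP packet (the MTU)
+   14  Ethernet header (6 dst MAC + 6 src MAC + 2 EtherType)
+    8  two 802.1Q VLAN tags (Q-in-Q), 4 bytes each
+    4  FCS
= 9026  bytes on the wire, comfortably under 9216 (= 9 × 1024)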

Asus Router

I also have an ASUS ROG Rapture GT-AXE16000 WiFi 6E Gaming Router. Setting up jumbo frames on it is super easy. You simply go to LAN > Switch Control and enable Jumbo Frame.

[Screenshot: the LAN > Switch Control page with Jumbo Frame enabled]

Ubuntu NUC

I have an Intel NUC6i5SYH running Ubuntu Server 22. To set up jumbo frames on it, you only need one command: sudo ip link set dev eno1 mtu 9000. In the transcript below, notice the interface goes from mtu 1500 to mtu 9000. (MTU is the largest IP packet a NIC or interface can send or receive without fragmentation.) Note that the change doesn’t survive a reboot; see the netplan sketch after the transcript.

rex@rex-nuc:~$ ip link show dev eno1
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether f4:4d:30:66:8e:0b brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
rex@rex-nuc:~$ sudo ip link set dev eno1 mtu 9000
rex@rex-nuc:~$ ip link show dev eno1
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether f4:4d:30:66:8e:0b brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
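Since ip link set isn’t persistent, the interface falls back to mtu 1500 after a reboot. Ubuntu Server 22.x configures networking with netplan, so a sketch like the one below should persist it (the YAML filename under /etc/netplan/ varies per install, and I’m assuming eno1 gets its address via DHCP):

# In /etc/netplan/<your-config>.yaml, add mtu: 9000 to the interface stanza:
#
#   network:
#     version: 2
#     ethernets:
#       eno1:
#         dhcp4: true
#         mtu: 9000
#
# Then apply it:
sudo netplan apply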

Windows MS-01

Currently, because I’m still testing a lot of things, my Minisforum MS-01 is only running Windows 11, so I’ll set up jumbo frames on it too. Windows hides this setting pretty deep: Control Panel > Network and Internet > Network and Sharing Center > Change adapter settings > double-click your link > Properties > Configure > Advanced > find “Jumbo Packet” in the Property list > set the value to “9014 Bytes”. Windows says 9014 because it counts the 9000-byte IP packet plus the 14-byte Ethernet header, but it’s still a 9000-byte MTU.

[Screenshot: the Jumbo Packet property in the network adapter’s Advanced settings]

Asustor NAS

I have an Asustor FS6712X NAS running the stock OS (Asustor Data Master 5). To set up jumbo frames on it, go to Settings > Network > Network Interface > click on your link > Configure > set MTU to 9000.

[Screenshot: the MTU setting in ADM’s network interface configuration]

MacBook via Thunderbolt

Sometimes I connect my MacBook Pro to my switch via the OWC Thunderbolt 3 10G Ethernet Adapter, so I’ll set that up too. On macOS Sequoia, go to System Settings > Network > click on the “Details…” button of your link > Hardware > set MTU to Jumbo (9000).

[Screenshot: the Hardware tab with MTU set to Jumbo (9000)]
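If you prefer the terminal, macOS’s BSD-style ifconfig can set it too; a sketch (not persistent across reboots, and en11 is the device name of my adapter, so yours may differ):

sudo ifconfig en11 mtu 9000
ifconfig en11 | grep mtu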

Testing

The don’t-fragment behavior of ping is a bit off on both Windows and macOS, so I’ll test only from Linux with my Ubuntu NUC. Since each link carries traffic in both directions, testing from one end should suffice.

ping uses the ICMP protocol, whose header is 8 bytes; add the 20-byte IPv4 header on top and that’s 28 bytes of headers in total. So the ICMP payload size should be set to 9000 − 28 = 8972 bytes. In addition, -M do sets the don’t-fragment (DF) flag in IPv4, so an oversized packet errors out instead of being silently fragmented.
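Worked out (plain IPv4, no IP options):

  20  IPv4 header
+  8  ICMP header
= 28  bytes of headers
9000 (MTU) − 28 = 8972 bytes of ICMP payload, i.e. exactly one full-size packet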

rex@rex-nuc:~$ ping -M do -s 8972 -c 3 ms-01.lan
PING ms-01.lan (192.168.50.100) 8972(9000) bytes of data.
8980 bytes from ms-01 (192.168.50.100): icmp_seq=1 ttl=128 time=3.38 ms
8980 bytes from ms-01 (192.168.50.100): icmp_seq=2 ttl=128 time=1.80 ms
8980 bytes from ms-01 (192.168.50.100): icmp_seq=3 ttl=128 time=1.32 ms

--- ms-01.lan ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.317/2.165/3.380/0.881 ms
rex@rex-nuc:~$ ping -M do -s 8972 -c 3 flashstor.lan
PING flashstor.lan (192.168.50.95) 8972(9000) bytes of data.
8980 bytes from FS6712X-251D (192.168.50.95): icmp_seq=1 ttl=64 time=1.04 ms
8980 bytes from FS6712X-251D (192.168.50.95): icmp_seq=2 ttl=64 time=0.502 ms
8980 bytes from FS6712X-251D (192.168.50.95): icmp_seq=3 ttl=64 time=0.533 ms

--- flashstor.lan ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.502/0.692/1.042/0.247 ms
rex@rex-nuc:~$ ping -M do -s 8972 -c 3 mbp.lan
PING mbp.lan (192.168.50.240) 8972(9000) bytes of data.
8980 bytes from MacBook-Pro (192.168.50.240): icmp_seq=1 ttl=64 time=0.501 ms
8980 bytes from MacBook-Pro (192.168.50.240): icmp_seq=2 ttl=64 time=0.651 ms
8980 bytes from MacBook-Pro (192.168.50.240): icmp_seq=3 ttl=64 time=0.640 ms

--- mbp.lan ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2024ms
rtt min/avg/max/mdev = 0.501/0.597/0.651/0.068 ms

Success! (The replies read 8980 bytes because Linux ping counts the 8972-byte payload plus the 8-byte ICMP header.) We can confirm this is indeed a 9000-byte MTU by adding one more byte to the payload.

rex@rex-nuc:~$ ping -M do -s 8973 -c 3 ms-01.lan
PING ms-01.lan (192.168.50.100) 8973(9001) bytes of data.
ping: local error: message too long, mtu=9000
ping: local error: message too long, mtu=9000
ping: local error: message too long, mtu=9000

--- ms-01.lan ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2051ms

rex@rex-nuc:~$ ping -M do -s 8973 -c 3 flashstor.lan
PING flashstor.lan (192.168.50.95) 8973(9001) bytes of data.
ping: local error: message too long, mtu=9000
ping: local error: message too long, mtu=9000
ping: local error: message too long, mtu=9000

--- flashstor.lan ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2054ms

rex@rex-nuc:~$ ping -M do -s 8973 -c 3 mbp.lan
PING mbp.lan (192.168.50.240) 8973(9001) bytes of data.
ping: local error: message too long, mtu=9000
ping: local error: message too long, mtu=9000
ping: local error: message too long, mtu=9000

--- mbp.lan ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2044ms
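If you want to spot-check all hosts at once, a quick loop does the job (my own throwaway helper, not part of the original setup):

for host in ms-01.lan flashstor.lan mbp.lan; do
  # -c 1: single probe; -q: quiet; we only care about the exit status
  if ping -M do -s 8972 -c 1 -q "$host" > /dev/null 2>&1; then
    echo "$host: 9000 MTU OK"
  else
    echo "$host: jumbo ping failed"
  fi
done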

Now we can test the actual performance gain from using jumbo frames. I set up an iperf3 server on my MS-01 and ran the tests from my NUC.
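For reference, the server side is just iperf3 in server mode, left running on the MS-01 (it listens on TCP port 5201 by default):

iperf3 -s

First, the NUC at the standard 1500 MTU, then again at 9000: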

rex@rex-nuc:~$ sudo ip link set dev eno1 mtu 1500
rex@rex-nuc:~$ iperf3 -t 5 -c ms-01.lan
Connecting to host ms-01.lan, port 5201
[  5] local 192.168.50.24 port 58958 connected to 192.168.50.100 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   115 MBytes   962 Mbits/sec    0    270 KBytes
[  5]   1.00-2.00   sec   111 MBytes   933 Mbits/sec    0    270 KBytes
[  5]   2.00-3.00   sec   111 MBytes   933 Mbits/sec    0    270 KBytes
[  5]   3.00-4.00   sec   111 MBytes   933 Mbits/sec    0    270 KBytes
[  5]   4.00-5.00   sec   111 MBytes   933 Mbits/sec    0    270 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-5.00   sec   560 MBytes   939 Mbits/sec    0             sender
[  5]   0.00-5.00   sec   557 MBytes   934 Mbits/sec                  receiver

iperf Done.
rex@rex-nuc:~$ sudo ip link set dev eno1 mtu 9000
rex@rex-nuc:~$ iperf3 -t 5 -c ms-01.lan
Connecting to host ms-01.lan, port 5201
[  5] local 192.168.50.24 port 55346 connected to 192.168.50.100 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   109 MBytes   915 Mbits/sec    0    288 KBytes
[  5]   1.00-2.00   sec   108 MBytes   909 Mbits/sec    0    288 KBytes
[  5]   2.00-3.00   sec   108 MBytes   909 Mbits/sec    0    288 KBytes
[  5]   3.00-4.00   sec   108 MBytes   908 Mbits/sec    0    288 KBytes
[  5]   4.00-5.00   sec   108 MBytes   909 Mbits/sec    0    288 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-5.00   sec   542 MBytes   910 Mbits/sec    0             sender
[  5]   0.00-5.00   sec   542 MBytes   908 Mbits/sec                  receiver

iperf Done.

The results show that jumbo frames on my NUC actually performed marginally worse than standard frames (910 vs. 939 Mbit/s), which in practice means no gain at all. That’s because my NUC is connected to my router over a 1 GbE link. Let me quote ChatGPT for the explanation:

On 1 GbE, jumbo frames don’t provide meaningful gains because the link itself is the limiting factor. While increasing MTU from 1500 to 9000 reduces packet rate from ~81k/sec to ~13k/sec, modern CPUs and NICs can already process 81k/sec effortlessly, so throughput and latency remain unchanged. The result is more configuration complexity without any practical performance benefit.

Let’s test from my Asustor NAS:

rex@FS6712X-251D:/volume1/home/rex/iperf3 $ ip link show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 78:72:64:41:25:1d brd ff:ff:ff:ff:ff:ff
rex@FS6712X-251D:/volume1/home/rex/iperf3 $ ./iperf3 -t 5 -c ms-01.lan
Connecting to host ms-01.lan, port 5201
[  4] local 192.168.50.95 port 34388 connected to 192.168.50.100 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   826 MBytes  6.93 Gbits/sec   86   1.56 MBytes
[  4]   1.00-2.00   sec   865 MBytes  7.26 Gbits/sec    1   1.55 MBytes
[  4]   2.00-3.00   sec   866 MBytes  7.26 Gbits/sec   23   1.55 MBytes
[  4]   3.00-4.00   sec   874 MBytes  7.33 Gbits/sec    3   1.54 MBytes
[  4]   4.00-5.00   sec   865 MBytes  7.26 Gbits/sec   34   1.55 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-5.00   sec  4.20 GBytes  7.21 Gbits/sec  147             sender
[  4]   0.00-5.00   sec  4.19 GBytes  7.20 Gbits/sec                  receiver

iperf Done.
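# (eth0 MTU raised from 1500 to 9000 via the ADM GUI between these two runs)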
rex@FS6712X-251D:/volume1/home/rex/iperf3 $ ip link show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 78:72:64:41:25:1d brd ff:ff:ff:ff:ff:ff
rex@FS6712X-251D:/volume1/home/rex/iperf3 $ ./iperf3 -t 5 -c ms-01.lan
Connecting to host ms-01.lan, port 5201
[  4] local 192.168.50.95 port 58064 connected to 192.168.50.100 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.14 GBytes  9.81 Gbits/sec  145   1.52 MBytes
[  4]   1.00-2.00   sec  1.15 GBytes  9.90 Gbits/sec    2   1.54 MBytes
[  4]   2.00-3.00   sec  1.14 GBytes  9.80 Gbits/sec   41   1.12 MBytes
[  4]   3.00-4.00   sec  1.13 GBytes  9.69 Gbits/sec   59   1.51 MBytes
[  4]   4.00-5.00   sec  1.15 GBytes  9.86 Gbits/sec   43   1.18 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-5.00   sec  5.71 GBytes  9.81 Gbits/sec  290             sender
[  4]   0.00-5.00   sec  5.71 GBytes  9.81 Gbits/sec                  receiver

iperf Done.

Now, with a 10 GbE link, the gain is clear: roughly 7.2 Gbit/s at 1500 MTU versus 9.8 Gbit/s at 9000.

Let’s test from my MacBook Pro via the Thunderbolt adapter too:

~/Desktop
🌸  ❯ ifconfig en11
en11: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
 options=567<RXCSUM,TXCSUM,VLAN_MTU,TSO4,TSO6,AV,CHANNEL_IO>
 ether 00:23:a4:0d:0b:48
 inet6 fe80::14fb:692c:419e:b733%en11 prefixlen 64 secured scopeid 0x16
 inet6 fd99:1648:431b:f94e:8fc:7cb3:f4e4:391e prefixlen 64 autoconf secured
 inet 192.168.50.240 netmask 0xffffff00 broadcast 192.168.50.255
 nd6 options=201<PERFORMNUD,DAD>
 media: 10Gbase-T <full-duplex>
 status: active

~/Desktop
🌸  ❯ iperf3 -t 5 -c ms-01.lan
Connecting to host ms-01.lan, port 5201
[  7] local 192.168.50.240 port 57802 connected to 192.168.50.100 port 5201
[ ID] Interval           Transfer     Bitrate
[  7]   0.00-1.00   sec   400 MBytes  3.36 Gbits/sec
[  7]   1.00-2.00   sec   628 MBytes  5.27 Gbits/sec
[  7]   2.00-3.00   sec   744 MBytes  6.24 Gbits/sec
[  7]   3.00-4.00   sec   741 MBytes  6.19 Gbits/sec
[  7]   4.00-5.00   sec   748 MBytes  6.30 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  7]   0.00-5.00   sec  3.19 GBytes  5.47 Gbits/sec                  sender
[  7]   0.00-5.00   sec  3.18 GBytes  5.47 Gbits/sec                  receiver

iperf Done.

~/Desktop
🌸  ❯ ifconfig en11
en11: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 9000
 options=567<RXCSUM,TXCSUM,VLAN_MTU,TSO4,TSO6,AV,CHANNEL_IO>
 ether 00:23:a4:0d:0b:48
 inet6 fe80::14fb:692c:419e:b733%en11 prefixlen 64 secured scopeid 0x16
 inet6 fd99:1648:431b:f94e:8fc:7cb3:f4e4:391e prefixlen 64 autoconf secured
 inet 192.168.50.240 netmask 0xffffff00 broadcast 192.168.50.255
 nd6 options=201<PERFORMNUD,DAD>
 media: 10Gbase-T <full-duplex>
 status: active

~/Desktop
🌸  ❯ iperf3 -t 5 -c ms-01.lan
Connecting to host ms-01.lan, port 5201
[  7] local 192.168.50.240 port 57829 connected to 192.168.50.100 port 5201
[ ID] Interval           Transfer     Bitrate
[  7]   0.00-1.00   sec  1.14 GBytes  9.74 Gbits/sec
[  7]   1.00-2.00   sec  1.14 GBytes  9.82 Gbits/sec
[  7]   2.00-3.00   sec  1.14 GBytes  9.82 Gbits/sec
[  7]   3.00-4.00   sec  1.15 GBytes  9.82 Gbits/sec
[  7]   4.00-5.00   sec  1.14 GBytes  9.82 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  7]   0.00-5.00   sec  5.71 GBytes  9.80 Gbits/sec                  sender
[  7]   0.00-5.00   sec  5.70 GBytes  9.80 Gbits/sec                  receiver

iperf Done.

Again, with a 10 GbE link, there’s a clear gain: about 5.5 Gbit/s at 1500 MTU versus 9.8 Gbit/s at 9000.

Let me again quote ChatGPT for the explanation:

On 10 GbE, jumbo frames make a clear difference because packet rates get much higher. At 1500 MTU, running at full line rate requires processing ~810k packets per second, which puts a significant load on CPUs, NICs, and interrupts. Switching to 9000 MTU cuts this to ~135k packets per second, drastically reducing overhead. This makes it easier to sustain full 10 GbE speeds and lowers CPU usage.
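The packet-rate figures in both quotes check out. Here’s the back-of-the-envelope math, counting 38 bytes of per-frame overhead (14 Ethernet header + 4 FCS + 8 preamble/SFD + 12 inter-frame gap):

 1 Gbit/s, 1500 MTU:  10^9  / ((1500 + 38) × 8) ≈  81,000 packets/sec
 1 Gbit/s, 9000 MTU:  10^9  / ((9000 + 38) × 8) ≈  13,800 packets/sec
10 Gbit/s, 1500 MTU:  10^10 / ((1500 + 38) × 8) ≈ 812,700 packets/sec
10 Gbit/s, 9000 MTU:  10^10 / ((9000 + 38) × 8) ≈ 138,300 packets/sec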

Conclusion

So, ultimately, it looks like jumbo frames are only worth setting up on 10 GbE links. I’m going to roll back my NUC and router to the standard 1500 MTU. (One caveat worth knowing: mixing MTUs on the same LAN is generally fine for TCP, since MSS negotiation caps segment sizes, but large UDP datagrams from a 9000-MTU host to a 1500-MTU host can be silently dropped.)
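On the NUC, that’s the same command as before with 1500 (and dropping the mtu line from netplan if you persisted it):

sudo ip link set dev eno1 mtu 1500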