# Getting Started with Linux KVM
## Preamble

This document serves as a technical guide for deploying and validating a high-performance virtual network gateway using Asternos VPP in a virtualized Linux environment. It outlines the environment requirements, configuration steps, and key technologies used throughout the process.

## Target Audience

This guide is designed for network engineers, system administrators, and developers who need to build a high-performance network testing or routing platform on top of QEMU/KVM with PCI passthrough acceleration.

## Prerequisites

To follow this guide effectively, readers should have basic proficiency in the following areas:

- **Linux fundamentals**: comfort with Linux command-line operations, including editing files and performing routine system administration tasks.
- **Networking fundamentals**: understanding of essential Layer 2/Layer 3 concepts such as IP addressing, subnet masks, default gateways, routing, and VLAN segmentation.
- **Virtualization concepts**: basic knowledge of virtual machines and host/guest architectures, ideally with some familiarity with QEMU/KVM.

## Objective

This document provides a step-by-step guide for deploying an Asternos VPP virtual machine on an Ubuntu host using QEMU/KVM along with PCI passthrough. The end goal is to build and verify a high-performance virtual router that supports inter-VLAN routing and NAT for internet access within a virtualized test environment.

## Applicable Models and Versions

### Hardware

- **Host machine**: ThinkCentre M8600t-N000 (example reference model)
- **Network adapter**: Intel I350 quad-port Gigabit Ethernet
- **CPU requirements**: the processor must support the SSE4 instruction set. You can verify this using `lscpu` and checking that `sse4` appears in the CPU flags.

### Software

- **Host operating system**: Ubuntu Linux 24.04
- **Virtualization stack**: QEMU/KVM 8.2.2, libvirt 10.0.0
- **Guest system**: Asternos VPP

## Feature Overview

- **PCI passthrough**: a virtualization feature that assigns a physical hardware device directly to a VM, giving the VM exclusive access and enabling near-native performance.
- **Inter-VLAN routing**: a routing function that enables communication between different network segments by creating virtual Layer 3 interfaces for each VLAN.
- **Network Address Translation (NAT)**: allows devices in private subnets to access the public internet by reusing the router's public IP address.

## Typical Deployment Example: Dual-Subnet Routing + NAT

### Requirements

- Deploy an Asternos VPP virtual router with one dedicated WAN port and multiple dedicated physical LAN ports via PCI passthrough.
- Group the LAN ports into two VLANs, each connecting to a separate PC or subnet.
- Ensure both PCs/subnets have internet access via the VM's NAT.
- Ensure hosts in both VLANs can communicate with each other through inter-VLAN routing.

### Topology

Physical connections:

- Host `ens3f0` (PCI address `02:00.0`) → upstream router (WAN)
- Host `ens3f1` (PCI address `02:00.1`) → PC1 (LAN1)
- Host `ens3f2` (PCI address `02:00.2`) → PC2 (LAN2)
- Host `ens3f3` (PCI address `02:00.3`) → PC3 (LAN3)

### Environment

| Device Type | Model/System | Role/Description |
| --- | --- | --- |
| Host machine | ThinkCentre M8600t-N000 | Ubuntu, QEMU/KVM, libvirt host |
| VM | Asternos VPP | 8 GB RAM, 4-core CPU, 64 GB disk |
| PC1 | Windows PC | LAN1 client, connected to `ens3f1` |
| PC2 | Windows PC | LAN2 client, connected to `ens3f2` |
| PC3 | Windows PC | LAN3 client, connected to `ens3f3` |
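Before making any firmware or GRUB changes, you can quickly confirm from a running system that the host CPU meets these requirements. The following is a minimal sketch using standard Linux tools; the flag and module names are those exposed by the kernel:

```bash
# Check that the CPU advertises SSE4 support (expect sse4_1 / sse4_2)
lscpu | grep -o 'sse4[_0-9a-z]*' | sort -u

# Check that hardware virtualization (VT-x) is available
lscpu | grep -i virtualization

# Confirm the KVM kernel modules are loaded
lsmod | grep kvm
```

If the first command prints nothing, the CPU does not support SSE4 and is unsuitable for this deployment.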
### Network Plan

| Interface (Asternos) | IP Address / Range | Description |
| --- | --- | --- |
| WAN: Ethernet1 | 192.168.200.178/24 | Connects to upstream router 192.168.200.1 |
| LAN1: Vlan100 | 10.0.1.0/24 | Subnet for PC1 and PC3, gateway 10.0.1.1 |
| LAN2: Vlan200 | 10.0.2.0/24 | Subnet for PC2, gateway 10.0.2.1 |

## Software Acquisition

Software download: https://docs.asternos.com/api/files/8442c9f3-ff4d-42e0-9e2c-baff1a839a84

The `.img` file provided in this guide is a pre-installed virtual disk image. It contains virtual drivers intended only for use in virtualized environments.

## Configuration Steps on the Ubuntu Host

### BIOS/UEFI Settings

**Objective**: enable the IOMMU function at the firmware level (BIOS/UEFI), making the hardware feature available to the operating system.

**Action**: reboot the host and enter the BIOS/UEFI setup. Ensure that both Intel(R) VT-d and Intel(R) Virtualization Technology are enabled.

### GRUB Parameter Configuration

**Objective**: instruct the Linux kernel to activate and use the IOMMU feature that was enabled in the firmware.

**Action**: edit the GRUB configuration file `/etc/default/grub`, find the line starting with `GRUB_CMDLINE_LINUX_DEFAULT`, and add `intel_iommu=on iommu=pt` inside the quotes:

```bash
sudo nano /etc/default/grub
# GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt"

# Update the GRUB configuration
sudo update-grub
```

### Configure the VFIO Driver

**Objective**: use the dedicated `vfio-pci` driver to take control of the physical NICs intended for passthrough. This prevents the host OS from loading its default drivers, making the NICs available to the VM.

**Action**:

Find the NIC's device ID:

```bash
# This command lists all network devices and their IDs
lspci -nn | grep -i ethernet
```

> Note: `[8086:1521]` is the device ID. If your network card is different, replace `8086:1521` in the commands below with the ID you found.

Configure driver binding and blacklist:

```bash
# Tell the system that devices with ID 8086:1521 should
# be managed by vfio-pci
echo "options vfio-pci ids=8086:1521" | sudo tee /etc/modprobe.d/vfio.conf

# Prevent Ubuntu from loading the default 'igb' driver
# for this NIC to avoid conflicts
echo "blacklist igb" | sudo tee /etc/modprobe.d/blacklist-igb.conf
```

Force early loading of the VFIO modules. Edit `/etc/initramfs-tools/modules` and add the following lines at the end:

```
# /etc/initramfs-tools/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```

Update the configuration and reboot:

```bash
sudo update-initramfs -u
sudo reboot
```

### Verify the Host Configuration

After rebooting, run the following command in the host terminal:

```bash
lspci -nnk | grep -iA3 '02:00.'
```

Expected result: the `Kernel driver in use` field for all four NICs (from `02:00.0` to `02:00.3`) should now show `vfio-pci`.
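If any port still reports the `igb` driver, or QEMU later refuses to start with a VFIO error, it can help to inspect each function's IOMMU group and bound driver directly in sysfs. The loop below is a minimal sketch that assumes the PCI addresses used in this guide (`0000:02:00.0` through `0000:02:00.3`); adjust them if your card sits in a different slot:

```bash
#!/usr/bin/env bash
# For each I350 port, print its IOMMU group and the driver currently bound.
for dev in 0000:02:00.0 0000:02:00.1 0000:02:00.2 0000:02:00.3; do
    group=$(basename "$(readlink -f /sys/bus/pci/devices/$dev/iommu_group)")
    if [ -e "/sys/bus/pci/devices/$dev/driver" ]; then
        driver=$(basename "$(readlink -f /sys/bus/pci/devices/$dev/driver)")
    else
        driver="(none)"
    fi
    echo "$dev  iommu_group=$group  driver=$driver"
done
```

Each port should report `driver=vfio-pci`, and for clean passthrough each function should be in its own IOMMU group (or share a group only with sibling functions that are all being passed through together).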
## Launching the Virtual Machine

### Method A: Manual Launch with QEMU (for Quick Tests)

This method starts the virtual machine directly with a single command. It is simple and convenient, suitable for temporary testing and validation.

Run the following QEMU command on the host (replace the `-drive` path with the actual path to your image file):

```bash
sudo qemu-system-x86_64 \
  -enable-kvm \
  -m 8192 \
  -smp 4 \
  -cpu host \
  -drive file=/var/lib/libvirt/images/sonic-vpp.img,if=virtio,format=qcow2 \
  -device vfio-pci,host=02:00.0,id=wan-nic \
  -device vfio-pci,host=02:00.1,id=lan-nic1 \
  -device vfio-pci,host=02:00.2,id=lan-nic2 \
  -device vfio-pci,host=02:00.3,id=lan-nic3 \
  -nographic \
  -serial mon:stdio
```

**Interface mapping**: the order of the `-device` parameters determines the interface names inside the Asternos VM. For this example:

| QEMU Device | Host PCI Address | Asternos VM Interface | Planned Use |
| --- | --- | --- | --- |
| host=02:00.0 | 02:00.0 | Ethernet1 | WAN |
| host=02:00.1 | 02:00.1 | Ethernet2 | LAN port (PC1) |
| host=02:00.2 | 02:00.2 | Ethernet3 | LAN port (PC2) |
| host=02:00.3 | 02:00.3 | Ethernet4 | LAN port (PC3) |

> **Important notice: network port order.** The order of interfaces such as Ethernet1, Ethernet2, etc., as recognized internally by Asternos VPP, is determined by the order of the `-device` parameters in the QEMU startup command (i.e., the order of PCI addresses). This order may not match the physical arrangement of network ports on the back panel of your server chassis (e.g., top to bottom, left to right).
>
> **Strong recommendation**: before proceeding with the next configuration step, connect only one network cable (for example, the WAN port), start the virtual machine, and use the `show interface status` command to identify which Ethernet interface changes to the `up` state. This helps you correctly map physical ports to logical ports and avoid configuration failures caused by incorrect cabling.

### Method B: Persistent Launch with libvirt (Recommended)

This method uses libvirt to manage the virtual machine, enabling persistent operation and auto-start on boot.

**Create the VM.** Run the following command on the host (replace the `--disk` path with the actual path to your image file). After executing this command, the virtual machine will be automatically defined and started, and you will see the boot process and login prompt directly in your current terminal:

```bash
sudo virt-install \
  --name asternos \
  --virt-type kvm \
  --memory 8192 \
  --vcpus 4 \
  --cpu host-passthrough \
  --disk path=/var/lib/libvirt/images/sonic-vpp.img,bus=virtio \
  --import \
  --os-variant debian11 \
  --network none \
  --host-device 02:00.0 \
  --host-device 02:00.1 \
  --host-device 02:00.2 \
  --host-device 02:00.3 \
  --nographics
```

**Auto-start the virtual machine.** Once the virtual machine has been created successfully, open a new terminal on the host machine and run the following command to set it to start automatically on boot:

```bash
sudo virsh autostart asternos
```
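To confirm the domain was defined correctly, is running, and has auto-start enabled, you can query libvirt directly. This is a quick sketch using standard `virsh` commands; the domain name `asternos` matches the `--name` used above:

```bash
# Show all defined domains and their current state
sudo virsh list --all

# Show details for the asternos domain, including the "Autostart" flag
sudo virsh dominfo asternos
```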
## Access and Configure the Asternos VPP VM

Regardless of which method you used to start the virtual machine, the subsequent configuration steps are the same.

### Access the Virtual Machine Console

- If you used Method A (QEMU), the VM console is already displayed in your current terminal.
- If you used Method B (libvirt), you can connect to the virtual machine console at any time using the following command in the host terminal:

```bash
sudo virsh console asternos
```

### Log In and Enter Configuration Mode

At the login prompt, use the default credentials to access the system:

- Username: `admin`
- Password: `asteros`

### Step-by-Step Configuration and Verification

**Launch the command-line interface and enter configuration mode:**

```
admin@sonic:~$ sonic-cli
sonic# configure terminal
```

**Configure the WAN interface:**

```
sonic(config)# interface ethernet 1
sonic(config-if-1)# description WAN-port
sonic(config-if-1)# ip address 192.168.200.178/24
# Assign this interface to NAT zone 1.
# By convention, the outside (WAN) interface is a non-zero zone,
# and inside interfaces are zone 0.
sonic(config-if-1)# nat-zone 1
sonic(config-if-1)# exit
```

**Configure VLANs and gateway interfaces:**

```
sonic(config)# vlan 100
sonic(config-vlan-100)# exit
sonic(config)# vlan 200
sonic(config-vlan-200)# exit

sonic(config)# interface vlan 100
sonic(config-vlan-if-100)# description LAN1-gateway-for-PC1-and-PC3
sonic(config-vlan-if-100)# ip address 10.0.1.1/24
sonic(config-vlan-if-100)# exit

sonic(config)# interface vlan 200
sonic(config-vlan-if-200)# description LAN2-gateway-for-PC2
sonic(config-vlan-if-200)# ip address 10.0.2.1/24
sonic(config-vlan-if-200)# exit
```

**Assign physical LAN ports to VLANs:**

```
# Connects to PC1
sonic(config)# interface ethernet 2
sonic(config-if-2)# description Port-for-PC1
sonic(config-if-2)# switchport access vlan 100
sonic(config-if-2)# exit

# Connects to PC2
sonic(config)# interface ethernet 3
sonic(config-if-3)# description Port-for-PC2
sonic(config-if-3)# switchport access vlan 200
sonic(config-if-3)# exit

# Connects to PC3
sonic(config)# interface ethernet 4
sonic(config-if-4)# description Port-for-PC3
sonic(config-if-4)# switchport access vlan 100
sonic(config-if-4)# exit
```

**Configure routing and NAT:**

```
# Configure the default route to point to the upstream router
sonic(config)# ip route 0.0.0.0/0 192.168.200.1

# Enable NAT globally
sonic(config)# nat enable

# Create a NAT pool named 'lan-pool' using the router's public IP
sonic(config)# nat pool lan-pool 192.168.200.178

# Bind the pool to a policy named 'lan-binding'
# to apply NAT to all traffic crossing zones
sonic(config)# nat binding lan-binding lan-pool
```

**Save the configuration:**

```
sonic(config)# write
```

**Verify the configuration.** Ensure that the admin/oper status of each interface shows up/up:

```
sonic# show ip interfaces
sonic# show ip route
sonic# show vlan summary
sonic# show nat config
```

### Configuration Steps: Client PCs

Configure static addressing on each client as follows (a Windows command-line sketch follows this list):

- **PC1**: set IP to 10.0.1.10, subnet mask to /24, gateway to 10.0.1.1, and DNS to 8.8.8.8.
- **PC2**: set IP to 10.0.2.10, subnet mask to /24, gateway to 10.0.2.1, and DNS to 8.8.8.8.
- **PC3**: set IP to 10.0.1.11, subnet mask to /24, gateway to 10.0.1.1, and DNS to 8.8.8.8.
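For reference, here is a minimal sketch of applying PC1's settings from an elevated Windows command prompt. The adapter name `"Ethernet"` is an assumption; replace it with the name reported by `netsh interface show interface` on your machine:

```
:: Run from an elevated command prompt on PC1.
:: "Ethernet" is a placeholder adapter name; adjust for your system.
netsh interface ip set address name="Ethernet" static 10.0.1.10 255.255.255.0 10.0.1.1
netsh interface ip set dns name="Ethernet" static 8.8.8.8
```

Repeat with the corresponding addresses for PC2 and PC3.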
## Function and Performance Verification

This chapter comprehensively verifies that the virtual router's core functions and performance metrics meet expectations through a series of tests.

### Overall Test Plan

We will proceed with the following sequence of tests:

1. **Layer 2 switching performance (intra-VLAN)**: use iperf3 to test the transfer rate between PC1 and PC3 to verify switching performance within the same VLAN.
2. **Layer 3 routing performance (inter-VLAN)**: use iperf3 to test the transfer rate between PC1 and PC2 to verify routing performance between different VLANs, monitored with router-side commands.
3. **External connectivity (NAT verification)**: use ping to test whether internal PCs can access the public internet, verifying basic NAT connectivity.

### Layer 2 Switching Performance Test (PC1 <-> PC3)

**Objective**: verify the Layer 2 (L2) data forwarding capability of the virtual router within the same VLAN. Since PC1 and PC3 are both in VLAN 100, communication between them is handled by L2 switching.

**Procedure**:

On PC1 (10.0.1.10), open a command prompt and ensure the iperf3 server is running:

```
iperf3 -s
```

On PC3 (10.0.1.11), open a command prompt and execute the client test:

```
iperf3 -c 10.0.1.10 -t 30
```

**Results analysis**: the test rate should stabilize around 950 Mbits/sec, achieving gigabit line rate.

### Layer 3 Routing Performance Test (PC1 <-> PC2)

**Objective**: verify the Layer 3 (L3) routing performance of the virtual router between different VLANs. Communication between PC1 (VLAN 100) and PC2 (VLAN 200) requires L3 routing.

**Procedure**:

On PC1 (10.0.1.10), open a command prompt and ensure the iperf3 server is running:

```
iperf3 -s
```

On PC2 (10.0.2.10), open a command prompt and execute the client test:

```
iperf3 -c 10.0.1.10 -t 30
```

**Results analysis**: the test rate should also achieve line-rate performance of around 950 Mbits/sec.

**Router-side verification**: during the iperf3 test, you can monitor the interface statistics in real time on the Asternos device by running `show counters interface`. The receive (RX) rate for Ethernet3 (connected to PC2) should be approximately 1000 Mbits/s, which matches the iperf3 results.

### Internet Access Function Test

**Objective**: verify that the NAT function is effective for all internal VLANs.

**Ping connectivity test**:

- On PC1 (VLAN 100), run `ping 8.8.8.8`. You should receive successful replies.
- On PC2 (VLAN 200), run `ping 8.8.8.8`. You should also receive successful replies.

## Conclusion

This guide demonstrates that Asternos VPP successfully combines the robust SONiC ecosystem with the high-performance VPP data plane. By leveraging virtual machines and PCI passthrough on standard x86 servers, users can easily build an enterprise-grade virtual gateway capable of line-rate Layer 2/3 forwarding and NAT. For network environments seeking high performance, flexibility, and cost efficiency, Asternos VPP is an ideal solution.
