Configuration Guide
Large/Mid-Scale Campus Usage Guide
## Scenario Overview

This solution is designed for medium- to large-scale campus networks, adopting a fully layered three-tier Spine–Aggregation–Leaf architecture and leveraging a controller for automated management and intelligent operations. The network is divided into a default access zone and a server zone, and uses key technologies such as distributed gateways and MC-LAG to provide high-performance, highly reliable connectivity for large-scale terminal access and high-availability server clusters.

### Centrally Intelligent Management Core

The controller automatically translates business intents into device configurations through a graphical interface and deploys them accurately, eliminating the traditional, tedious process of configuring devices one by one via the command-line interface. It offers full lifecycle management, including device onboarding, monitoring, and diagnostics, enabling network automation and high reliability.

### Elastic and Reliable Network Backbone

- Ultimate scalability: horizontal expansion at the aggregation layer supports the integration of large numbers of leaf switches, effortlessly accommodating future network growth.
- Flexible gateway deployment: distributed service gateways can be deployed at the leaf layer, confining service traffic to the access layer. This significantly enhances forwarding efficiency and network reliability while reducing the load on upper-layer devices.

### Scenario-Optimized Access Technologies

- Leaf distributed gateways serve wired and wireless terminals in the default access zone. Because the gateway is lowered to the access leaf, cross-subnet communication between terminals no longer needs to traverse the core layer, significantly reducing latency and enabling fault-domain isolation.
- Leaf MC-LAG provides high-availability Layer 2 access for the server zone. Two leaf switches are virtualized into a single logical device via MC-LAG to connect the servers, achieving link-level and device-level load balancing and seamless failover. This eliminates loops while ensuring uninterrupted continuity for critical business operations.

## Scheme Design

This medium- to large-scale campus employs a full three-tier Spine–Aggregation–Leaf network architecture, divided into a default access zone and a server zone.

### Default Access Zone

Leaf1 and Leaf2 function as distributed gateways, with Leaf1 dedicated to connecting APs and wireless terminals and Leaf2 managing wired terminals. Horizontal scaling at the aggregation layer (Agg1, Agg2) ensures high availability and load balancing. MC-LAG runs between the spine and aggregation devices to guarantee link reliability.

### Server Zone

Leaf3 and Leaf4 connect to the servers in pure Layer 2 mode via MC-LAG, with gateways centrally deployed on the spine devices. This simplifies network management in the server zone and enhances forwarding efficiency.

### Management Network

Spine devices and the default-zone leaf devices (Leaf1, Leaf2) are Layer 3 devices and use their loopback addresses as in-band management addresses. Aggregation-layer devices and the server-zone leaf devices (Leaf3, Leaf4) are Layer 2 devices; each is assigned an in-band management address from the address range provided in the basic network configuration, with all management gateways deployed on Spine1.

### DHCP Deployment

DHCP servers are deployed on both spine devices and are automatically configured as a DHCP failover pair by the controller, ensuring high availability of the address service.

### Controller Deployment

The controller is cloud-based and provides centralized policy deployment, configuration management, and status monitoring for all network devices through a graphical interface, significantly improving operational efficiency.
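The DHCP failover pair is configured automatically by the controller. As a rough illustration of the underlying idea — two servers serving one scope without ever handing out the same address twice — the sketch below splits a subnet's free addresses between a primary and a secondary server. The function name, the roughly 50/50 split, and the sample subnet are illustrative assumptions, not the controller's actual algorithm.

```python
import ipaddress

def split_scope(network: str, gateway: str):
    """Split a subnet's usable addresses between two failover DHCP servers,
    excluding the gateway address from both halves."""
    net = ipaddress.ip_network(network)
    gw = ipaddress.ip_address(gateway)
    # hosts() yields every usable address in the subnet; drop the gateway.
    hosts = [h for h in net.hosts() if h != gw]
    mid = len(hosts) // 2
    return hosts[:mid], hosts[mid:]

primary, secondary = split_scope("180.10.2.0/24", "180.10.2.1")
print(len(primary), len(secondary))  # 126 127 — no address appears in both halves
```

In a real failover pair the two servers also synchronize lease state, so either peer can renew a lease originally granted by the other; the split only governs which server allocates new addresses.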
## Foundation Link Data Planning

| Device | Interface | IP Address |
| --- | --- | --- |
| Spine1 | Ethernet53 | 172.22.244.10/24 |
| Spine1 | Loopback0 | 172.22.252.51/32 |
| Spine2 | Ethernet53 | 172.22.244.11/24 |
| Spine2 | Loopback0 | 172.22.252.52/32 |
| Leaf1 | Loopback0 | 172.22.252.53/32 |
| Leaf2 | Loopback0 | 172.22.252.54/32 |

## Service Network Data Planning

| Service Type | IP Segment | Gateway | Service VLAN | SSID |
| --- | --- | --- | --- | --- |
| Wireless service | 180.10.0.0/24 | 180.10.0.1/24 | 1080 | new ssid |
| Wired service | 180.10.1.0/24 | 180.10.1.1/24 | 1081 | – |
| AP management | 180.10.2.0/24 | 180.10.2.1/24 | 1082 | – |
| Server zone service | 180.10.15.0/24 | 180.10.15.1/24 | 1501 | – |
| Agg management | 172.22.200.0/24 | 172.22.200.1/24 | – | – |
| Server zone leaf management | 172.22.201.0/24 | 172.22.201.1/24 | – | – |

## Device Import

Administrators can create or import devices in bulk into specified sites/organizations. When an added inventory device connects to the controller and comes online, the controller automatically assigns it to the designated organization/site based on its MAC address.

### Add Devices One by One

Click [Configuration] > [Inventory Information] > [+] to create an inventory device, and fill in the relevant information as prompted on the page.

### Import via Excel

Click [Upload Devices], then [Download Template], and enter the information for the devices to be added to the inventory according to the template's specifications:

- MAC: the device's MAC address, typically found on the device's label.
- Device Type: the device model.
- Name: the device hostname; by default, the device's MAC address.
- ConfigTag: after an AP connects to the controller, it automatically pulls the configuration file corresponding to this tag. The default value is "default".
- FirmwareTag: when performing firmware upgrades, devices requiring an upgrade can be filtered by this tag. The default value is "default".
- Loopback: the device's loopback address; for all devices operating at Layer 3, this address serves as the device's in-band management address.
- ACLScaleProfile: optional values are "default" and "large-scale"; the default is "default".
- License: the AP's license file. For bulk imports, you can either enter the JSON-formatted license content directly in the Excel sheet, or add all devices to the inventory first and then import the license files in bulk afterward.
- Description: descriptive information about the device.

Click [Choose File] to upload the completed template, then click [Test Upload Data]. The controller automatically checks whether the uploaded data complies with the specifications and displays the results in the test report. Once complete, users can view the created devices in the [Inventory Information] view.

## Service Configuration

Click [Design Topology] to enter the corresponding page, select the large/mid-scale campus deployment, fill in the required device models and quantities according to the device roles, and click [Save] to finish pre-planning the network topology. The controller then generates the network topology from the entered information.

In the generated topology, click the [Edit] button on a device and fill in the corresponding information in the slide-out panel on the right:

- MAC: uniquely identify the device by its MAC address.
- Loopback0 IP: the IP address of the device's Loopback0 interface, used for in-band management.
- Hostname: the hostname of the device.
- Device Role: assign the device role as spine or leaf.
- Inter-port:
  - Local Port: the interface on the current device.
  - Neighbor: the peer device connected to the local interface.
  - Neighbor Port: the interface on the peer device interconnected with the local interface.

Upon completing all configurations, click [Save] in the upper-right corner of the page, then select [Confirm] in the pop-up window.
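The inter-port fields describe each link from both ends, so the planned topology can be sanity-checked locally before saving: every link entry should have a matching reverse entry. This is a hedged sketch of that check, not controller behavior; the device and port names are illustrative assumptions.

```python
def check_symmetry(links):
    """links: set of (local_device, local_port, neighbor, neighbor_port).
    Return a list of link entries whose reverse entry is missing."""
    errors = []
    for dev, port, nbr, nbr_port in links:
        # If spine1/Ethernet1 points at leaf1/Ethernet49, then
        # leaf1/Ethernet49 must point back at spine1/Ethernet1.
        if (nbr, nbr_port, dev, port) not in links:
            errors.append(f"{dev}/{port} -> {nbr}/{nbr_port} has no reverse entry")
    return errors

links = {
    ("spine1", "Ethernet1", "leaf1", "Ethernet49"),
    ("leaf1", "Ethernet49", "spine1", "Ethernet1"),
}
print(check_symmetry(links))  # []
```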
## Basic Network

Click [Basic Network] in the top-right corner.

### Business Network

- Management Address Segment: configure an in-band management network for the aggregation devices. Both spine and leaf devices are Layer 3 devices, so their Loopback0 addresses can be used for in-band management; the aggregation devices, however, are Layer 2 devices and require a VLAN interface to serve as the in-band management interface. The controller assigns an in-band management address to each aggregation device from the address segment entered by the user.
- Peerlink VLAN: configure a peerlink VLAN for the aggregation devices. The directly connected interfaces between the two devices are referred to as peer-link interfaces and are primarily used for transmitting protocol packets and, in the event of a failure, forwarding traffic. The VLAN dedicated to the peerlink interface is the peerlink VLAN.
- Peerlink IP: configure an IP address on the peerlink VLAN interface. Once the peerlink IP is set, the device knows which IP address to send control packets to when communicating with its peer. The IP addresses of the two peer aggregation devices must be within the same subnet.

### Server Network

Configure the in-band management network for the server-zone leaf devices, along with the peerlink interface VLAN and peerlink IP.

### Egress Router

Click [Create], select the interface ID of the spine device's uplink interface, and configure the IP address as per the service plan. To ensure normal network operation, a default route typically needs to be configured, with the next-hop IP set to the peer IP address of the spine uplink interface.

### Device

NTP: configure the NTP server IP address as the controller's address, providing a unified, accurate, and reliable time reference for the devices.

## Switch Configuration

Click [Create] on the right to set up the switch configuration.

### Default Zone

#### Leaf1

- Name: user-defined.
- Device: select the access 1 device.
- Configuration Type: select "default".

Procedure:

1. DHCP relay. Since the DHCP server is deployed on the spine and is not directly connected to the service devices on Leaf1, a DHCP relay needs to be configured. Click [Create], enter the DHCP server IP in the pop-up page, and then click [Add]. Because DHCP servers are deployed on both spine devices with DHCP failover configured, two DHCP server IP addresses need to be entered.
2. Business VLAN. Deploy the wireless service configuration on Leaf1 and set up the service gateway: configure the AP management VLAN and the wireless business VLAN.
   - IP: enter the service gateway address.
   - Access/Trunk: select the mode based on whether the interface sends and receives frames with VLAN tags. Trunk receives tagged frames and is typically used for wireless service VLANs; Access receives untagged frames and is typically used for the AP management VLAN and wired service VLANs.
   - Members: click the drop-down arrow to select the VLAN's member interfaces on the device.
3. PoE. The access switch features PoE, which can be enabled directly in the wired service configuration to supply power to PD devices. Click [Create], select the interfaces on which PoE is to be enabled, and set the startup delay time. PoE delay is a brief, intentional delay introduced at a PoE switch port between when it begins to supply power and when it actually delivers power to the powered device (PD).

Once all configurations are completed, click [Save] in the top-right corner to finish configuring Leaf1.

#### Leaf2

1. DHCP relay: same as Leaf1.
2. Business VLAN: deploy the wired service configuration on Leaf2 and set up the service gateway.
3. Wired clients information collection: interfaces with this feature enabled report information about the connected wired terminals to the controller.
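The peerlink planning rule stated in the Basic Network section — the two MC-LAG peers' peerlink IPs must fall inside the same subnet — is easy to verify before pushing configuration. A minimal sketch; the sample addresses are illustrative assumptions, not values from this deployment:

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str) -> bool:
    """True if two interface addresses (IP/prefix) share one subnet."""
    a = ipaddress.ip_interface(ip_a)
    b = ipaddress.ip_interface(ip_b)
    return a.network == b.network

print(same_subnet("10.0.0.1/30", "10.0.0.2/30"))  # True
print(same_subnet("10.0.0.1/30", "10.0.1.2/30"))  # False
```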
### Server Zone

#### Leaf

1. Link aggregation. Click [Create] and enter the link aggregation ID and members in the pop-up view.
   - Link Aggregation ID: create an ID within the range 1501–2000 as needed.
   - Mode: select whether the link aggregation mode is static or LACP dynamic negotiation.
   - Members: select the member interfaces connected to the server.
2. Business VLAN. Click [Create] and fill in the relevant information in the pop-up view.
   - VLAN: enter a VLAN ID between 2 and 4050 according to the service plan.
   - Members: only LAG interfaces configured in link aggregation can be selected as member interfaces.

Click [Save] after completing all configurations.

#### Spine

1. DHCP relay. Since the DHCP service is deployed on the spine itself, no relay configuration is required on the spine.
2. Business VLAN. Click [Create].
   - VLAN: corresponds to the service VLAN of the server-zone leaf switch.
   - IP: enter the planned gateway IP address of the service VLAN.
   - Broadcast Domain: select the leaf switch corresponding to the VLAN.

Click [Save] after completing all configurations.

### DHCP

The controller allows users to configure the DHCP server function on spine devices. After entering the site, click [Configuration] > [Switch Configuration] > [DHCP] to open the DHCP server configuration interface, then click the [+] button to create a new configuration.

Create the AP management address pool:

- Name: user-defined.
- Network: the network segment from which the DHCP server assigns client IP addresses.
- Gateway Address: the gateway address the DHCP server hands out to clients.
- DNS: the DNS server address.
- Address Pool: the address range the DHCP server allocates to clients.
- Lease Time: the IP address lease time.

Click [DHCP Option] and fill in the relevant information. The Controller IP option is a DHCP option designed specifically for wireless APs to discover the controller; fill in the controller's IP address. The controller also supports binding MAC addresses to fixed IPs, which users can configure as needed. Click [Save] after completing all configurations.

Follow the steps above to create the DHCP configurations for wireless terminals and wired terminals in turn. Once all configurations are completed, the DHCP view will appear as shown below.

### Wi-Fi Configuration

Click [Wi-Fi Configuration] > [+] to configure the basic information required by the wireless AP, e.g. SSID settings and security policy; the controller automatically generates the corresponding configuration. The controller supports multiple wireless service configurations; after an AP comes online, it determines which configuration to issue to the AP based on the configuration's [Config Tag] attribute.

#### SSID: LAN

When the AP has an extended wired interface and can connect terminals by wire (for example, a panel AP), the user can configure the access method for wired terminals in the LANs settings:

- UpstreamPorts: the uplink interfaces through which wired terminals reach the network via the AP, usually the interface connecting the AP to the switch. Keep this consistent with [UpstreamPorts] in the [SSID] > [Advanced] settings; the default is WAN.
- DownstreamPorts: the interfaces for wired terminal access.
- Downstream VLAN Tag: whether frames from the wired terminal carry a VLAN tag.
- VLAN ID: the VLAN tag the AP uses to identify packets received from wired terminals.
- DHCP Snooping Trusted: trusted interface for DHCP snooping; if the wired terminal needs to obtain an IP address via DHCP, this switch must be on.

## Configuration Release

### Switch

Switches support both in-band and out-of-band management; operation and maintenance personnel can choose flexibly based on current network conditions. For devices in the factory default state, whenever either the management port or a service port is up, the device actively initiates a DHCP request to obtain a temporary management IP address and the IP address of the cloud-based controller from the DHCP server, then connects to the controller to retrieve its configuration information.

Once all switches are successfully connected to the controller, click [Topology Consistency Verification] on the upper-right side of the [Design Topology] view to confirm that the discovered topology matches the planned topology. After verification, the controller can deploy configurations to the switches.

#### Push Basic Network Configuration

Click [Configuration] > [Design Topology] > [Basic Network] > [Push Configuration] to issue the basic configuration for all devices; by default, the controller selects all switches. Click [Next] > [Start] to begin issuing the basic network configurations to the switches.

#### Push Switch Configuration

Switch configuration: in the [Configuration] > [Switch Configuration] view, select the configuration to be deployed and click the [Push Configuration] button. In the pop-up window, click [Next] > [Start] to deploy the switch configuration to the switch.

DHCP: in the [Configuration] > [Switch Configuration] > [DHCP] view, select the configuration to be deployed and click the [Push Configuration] button to deliver it.

### AP

The AP configuration does not need to be issued manually. After the switch configuration is issued and takes effect, the switch's PoE power supply is turned on and the AP can power up and start working. When the AP connects to the controller using the information obtained through DHCP, the controller automatically sends the configuration to the corresponding AP by comparing the tag stored in the AP's inventory entry with the tag in the planning configuration.
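The AP's zero-touch onboarding above hinges on the "Controller IP" DHCP option configured earlier. Vendor controllers commonly carry such a value in a vendor-specific option like option 43, encoded as a type-length-value byte string; the sub-option code 0x01 and the TLV layout below are illustrative assumptions, not this controller's documented encoding (the GUI takes the controller IP directly).

```python
import ipaddress

def encode_controller_ip(ip: str, sub_option: int = 0x01) -> bytes:
    """Pack one controller IP as a type-length-value byte string,
    the shape commonly used inside DHCP option 43."""
    packed = ipaddress.ip_address(ip).packed  # 4 bytes for IPv4
    return bytes([sub_option, len(packed)]) + packed

blob = encode_controller_ip("172.22.252.51")
print(blob.hex())  # 0104ac16fc33
```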
