How does NLB work
Each server in the NLB cluster sends and receives heartbeat messages to identify the state of the cluster. A heartbeat message holds information on the state of the cluster, the cluster configuration, and the associated port rules.
Servers in an NLB cluster, as mentioned previously, are called hosts. Each NLB cluster has a default host: the server in the cluster that has the highest priority. A priority is a unique number assigned to each host in the NLB cluster, and it determines which host handles all requests that are not specifically load balanced according to port rules.
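As a rough illustration of how the default host is chosen, the sketch below picks the online host with the highest priority. The host names are invented, and the code follows NLB's host-ID convention that a lower priority number means a higher handling priority; it is only a sketch, not NLB's implementation.

```python
# Hypothetical hosts with NLB-style priorities (unique host IDs).
# In NLB, a lower priority number means a higher handling priority.
priorities = {"web1": 1, "web2": 2, "web3": 3}

def default_host(priorities, online):
    """Return the online host with the lowest priority number,
    i.e. the host that handles all traffic not covered by a port rule."""
    candidates = {h: p for h, p in priorities.items() if h in online}
    return min(candidates, key=candidates.get)
```

With all three hosts online, `web1` is the default host; if it goes offline, `web2` takes over that role after convergence.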
The load weight factor can be used to distribute client requests between the hosts in the NLB cluster. You can configure heavy loads for robust servers and light loads for servers with less processing power. If no load weight is defined, all servers in the NLB cluster have equal load weights. The lowest weight that can be assigned is 0 and the highest is 100. Traffic distribution is the term used to refer to how client requests are distributed in the NLB cluster.
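The effect of load weights can be simulated with a small sketch. The host names and weights below are illustrative, and the random selection merely stands in for NLB's actual distribution algorithm.

```python
import random

# Illustrative hosts with load weights in the 0-100 range.
hosts = {"web1": 80, "web2": 40, "web3": 40}

def pick_host(hosts):
    """Pick a host with probability proportional to its load weight."""
    names = list(hosts)
    return random.choices(names, weights=[hosts[n] for n in names])[0]

# Distribute 10,000 simulated requests and count each host's share:
# web1 should receive roughly twice as many requests as web2 or web3.
counts = {n: 0 for n in hosts}
for _ in range(10_000):
    counts[pick_host(hosts)] += 1
```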
As mentioned previously, all client requests are sent to the cluster IP address and are received by each host in the NLB cluster. The NLB driver assigns client requests to a server in the cluster based on the specified port rules, which direct traffic on specific ports to certain hosts. The servers in an NLB cluster send heartbeat messages to determine the state of the cluster.
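Conceptually, a port rule maps a port range and protocol to the hosts allowed to handle that traffic. The sketch below shows only the matching logic, with made-up rule fields and host names rather than NLB's actual configuration format.

```python
# Simplified port rules: each maps a port range and protocol to the
# hosts that may handle matching traffic. Fields are illustrative.
PORT_RULES = [
    {"ports": range(80, 81),     "protocol": "TCP", "hosts": ["web1", "web2"]},
    {"ports": range(1433, 1434), "protocol": "TCP", "hosts": ["sql1"]},
]

def hosts_for(port, protocol="TCP"):
    """Return the hosts eligible to handle traffic on the given port."""
    for rule in PORT_RULES:
        if port in rule["ports"] and rule["protocol"] == protocol:
            return rule["hosts"]
    return []  # no rule matched: in NLB, the default host handles it
```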
Whenever servers are added to or removed from an NLB cluster, a process known as convergence occurs. Convergence enables the NLB cluster to reconfigure itself so that its configuration reflects the server(s) that were added or removed.
Convergence also takes place when a server in the cluster does not send a heartbeat message within five seconds. During the convergence process, client requests continue to be handled by the NLB cluster. The NLB cluster automatically detects when a server fails and reroutes client requests to the other servers in the cluster that are still online, which improves availability for mission-critical applications. The performance of applications can also be scaled, because client requests are distributed between multiple servers in the NLB cluster.
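The failure-detection rule described above can be sketched as follows. The interval and miss limit match the defaults mentioned in this article (one-second heartbeats, five misses trigger convergence), while the host names and timestamps are invented for illustration.

```python
HEARTBEAT_INTERVAL = 1.0  # default seconds between heartbeats
MISSED_LIMIT = 5          # missed heartbeats that trigger convergence

def presumed_down(last_heartbeat, now):
    """True if a host has been silent long enough to trigger convergence."""
    return now - last_heartbeat >= MISSED_LIMIT * HEARTBEAT_INTERVAL

# Simulated last-heartbeat times: web2 has been silent for 5 seconds,
# so a check at t=11.0 would mark it as failed and start convergence.
last_seen = {"web1": 10.0, "web2": 6.0}
down = [h for h, t in last_seen.items() if presumed_down(t, now=11.0)]
```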
You can easily add servers to an NLB cluster as the network expands, and the NLB cluster does not have to be shut down to add or remove servers. Client requests to the NLB cluster are load balanced based on the processing configuration specified for the cluster.
You can also configure port rules to specify which servers should process specific requests. Through port rules, you control how client requests are processed by the servers in the NLB cluster. A port rule essentially acts as a filter on a specific port or range of ports.
You can specify a protocol parameter and a filtering mode to configure how traffic is load balanced between servers in the NLB cluster. If you're intentionally taking a node offline, you can use the drainstop command to let all active connections complete before the node goes offline.
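What "draining" means can be shown with a small state sketch: the node refuses new connections but lets active ones finish, and only then is it safe to stop. This illustrates the idea, not how the drainstop command is implemented.

```python
class Node:
    """Minimal model of a cluster node that supports draining."""

    def __init__(self):
        self.draining = False
        self.active = set()

    def on_new_connection(self, conn_id):
        if self.draining:
            return False  # refuse new connections while draining
        self.active.add(conn_id)
        return True

    def on_connection_closed(self, conn_id):
        self.active.discard(conn_id)

    def can_go_offline(self):
        """Safe to stop once draining and no active connections remain."""
        return self.draining and not self.active

node = Node()
node.on_new_connection("c1")            # accepted before draining starts
node.draining = True
refused = node.on_new_connection("c2")  # False: node is draining
node.on_connection_closed("c1")         # last active connection completes
```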
You can have a mix of applications running in the NLB cluster; for example, you can designate database traffic to go to the SQL Server node only. NLB can load balance multiple requests from the same client to the same node or to different nodes; this assignment is effectively random. NLB automatically detects and removes a failed node, but it cannot judge whether an application is running or has stopped working. That check must be done separately, for example by running a script.
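Since NLB only detects host-level failure, an application-level check has to probe the application itself. A minimal sketch (host and port are placeholders) tests whether anything is accepting TCP connections on the application's port; a real monitoring script would then drain or stop the node when the check fails.

```python
import socket

def app_is_up(host, port, timeout=2.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```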
NLB automatically rebalances the load when hosts are added or removed, and this is done within 10 seconds. NLB can be enabled on multiple network adapters.
This allows you to configure different NLB clusters. NLB can operate in one of two modes, unicast or multicast, but both modes cannot be enabled at the same time. Unicast is the default mode. Incoming traffic is received by all the hosts in the cluster, and the NLB driver filters it according to the defined port rules. NLB nodes do not communicate with each other about incoming client traffic, because NLB is enabled on all the nodes. Instead, a statistical mapping rule is created on each host to distribute incoming traffic.
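The statistical mapping idea can be sketched like this: every host applies the same deterministic rule to each incoming connection, so exactly one host accepts it without any coordination. The hash scheme and host names below are illustrative, not NLB's actual algorithm.

```python
import hashlib

HOSTS = ["web1", "web2", "web3"]  # illustrative cluster members

def owner(client_ip, hosts=HOSTS):
    """Deterministically map a client IP to the host that handles it."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return hosts[int.from_bytes(digest[:4], "big") % len(hosts)]

def accepts(my_name, client_ip):
    """Each host runs this same check locally; only the owner accepts."""
    return owner(client_ip) == my_name
```

Because every host computes the same mapping, exactly one host claims each connection; changing the host list changes the mapping, which is why a membership change forces the cluster to rebuild its state.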
This mapping remains the same unless there is a change in the cluster, for example when a node is removed or added. Convergence is the process of rebuilding the cluster state, and it is invoked when there is a change in the cluster, for example when a node fails, leaves, or rejoins the cluster. During this process, the remaining hosts continue to handle incoming client traffic. If a host is added to the cluster, convergence allows this host to receive its share of the load-balanced traffic.
Expansion of the cluster doesn't affect ongoing cluster operations and is achieved transparently to both Internet clients and to server applications. However, it might affect client sessions that span multiple TCP connections when client affinity is selected, because clients might be remapped to different cluster hosts between connections.
All the nodes in the cluster emit heartbeat messages to announce their availability. The default period for sending heartbeat messages is one second, and five missed heartbeat messages from a host cause NLB to invoke the convergence process. You can configure multiple NLB clusters on the same network adapter and then apply specific port rules to each of those IP addresses; these are referred to as "virtual clusters". You must be a member of the Administrators group on the node for which you're configuring NLB.
You don't need to be an administrator to run NLB Manager. Communication between cluster hosts is possible only in multicast mode. There's no restriction on the number of network adapters, and different hosts can have different network adapters. NLB can be configured on any machine as long as you have administrative rights on the remote computer; NLB Manager connects to remote hosts using the credentials you supply. If you use the same set of load-balanced servers for multiple applications or websites, port rules are based on the destination virtual IP address, using virtual clusters.
You can direct all client requests to a single host by using optional, single-host rules; NLB then routes client requests to a particular host that is running specific applications. You can enable Internet Group Management Protocol (IGMP) support on the cluster hosts to control switch port flooding (where incoming network packets are sent to all ports on the switch) when operating in multicast mode. NLB logs all actions and cluster changes in the event log. NLB is installed as a standard Windows Server networking driver component.
The following figure shows the relationship between NLB and other software components in a typical configuration. NLB provides Network Load Balancing Tools to configure and manage multiple clusters, and all of their hosts, from a single remote or local computer.
NLB enables clients to access the cluster by using a single, logical Internet name and virtual IP address, known as the cluster IP address, while retaining individual names for each computer. NLB allows multiple virtual IP addresses for multihomed servers. NLB can be bound to multiple network adapters, which enables you to configure multiple independent clusters on each host.
Support for multiple network adapters differs from virtual clusters in that virtual clusters allow you to configure multiple clusters on a single network adapter. NLB can be configured to automatically add a host back to the cluster if that host fails and is subsequently brought back online; the added host can then start handling new server requests from clients. NLB also enables you to take computers offline for preventive maintenance without disturbing cluster operations on the other hosts.
There is no restriction on the number of network adapters on each host, and different hosts can have a different number of adapters.
Within each cluster, all network adapters must be either multicast or unicast; NLB does not support a mixed environment of multicast and unicast within a single cluster. If you use unicast mode, the network adapter that is used to handle client-to-cluster traffic must support changing its media access control (MAC) address.