
How Google Manages Load via Load Balancing

No service is 100% available all of the time. Clients can be inconsiderate, demand can grow fifty-fold, a service might crash in response to a traffic spike, or an anchor might pull up a transatlantic cable.


People who rely on services such as QB hosting depend upon your infrastructure, and as service owners, we care about our users.

Faced with this chain of potential outage triggers, how can we make our infrastructure as adaptive and reliable as possible?

This article describes Google's approach to traffic management in the hope that you can use these best practices to improve the reliability, efficiency, and availability of your own services.

Over the years, Google has found that there is no single solution for equalizing and stabilizing network load. Instead, it uses a combination of technologies, tools, and strategies that work in harmony to keep its services reliable.

Cloud load balancing techniques offered by Google

Today, most companies prefer not to develop and maintain their own global load balancing solutions. Instead, they opt to use load balancing services from a large public cloud provider.

We'll discuss Google Cloud Load Balancer (GCLB) as a concrete example of large-scale load balancing, but nearly all of these best practices also apply to other cloud providers' load balancers.

Google has spent the past eighteen years building world-class infrastructure to make its services fast and reliable.

Today these systems serve Maps, YouTube, Gmail, Search, and many other products and services. GCLB is the publicly consumable global load balancing solution that externalizes these internally developed global load balancing systems.

This section describes the components of GCLB and how they work together to serve incoming user requests.

We trace a typical user request, such as one made to a cloud QuickBooks hosting service, from its creation to its delivery at its destination. The Niantic Pokémon GO case study, covered later, provides a concrete implementation of GCLB in the real world.

Instead of relying entirely on DNS geolocation, Google uses anycast, a method for sending clients to the closest cluster.

Google's global load balancer knows where clients are located and directs packets to the closest web service, giving users low latency while using a single virtual IP (VIP).

Using a single VIP also means the time to live (TTL) of DNS records can be increased, which further reduces latency.

# Way Number One – Anycast

Anycast is a network addressing and routing methodology that routes datagrams from a single sender to the topologically nearest node among a group of potential receivers, all of which are identified by the same destination IP address.

Google announces its IPs via Border Gateway Protocol (BGP) from multiple points in its network, then relies on the BGP routing mesh to deliver packets from a user (for example, one connecting to QuickBooks remote desktop services) to the closest frontend location.


The Transmission Control Protocol (TCP) session is terminated at this location. This deployment eliminates the problems of unicast IP proliferation and finds the closest frontend for the user.
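To make the idea concrete, here is a minimal sketch of anycast routing, not Google's implementation: every frontend site announces the same VIP, and the routing mesh delivers each packet to the topologically nearest announcer. The site names, VIP, and hop counts are hypothetical.

```python
# Sketch of anycast selection: several sites announce one VIP; a packet is
# delivered to the topologically nearest site (hop count stands in for BGP
# path length). All values below are made up for illustration.

ANYCAST_VIP = "203.0.113.10"

frontend_sites = [
    {"site": "europe-west", "vip": ANYCAST_VIP, "hops": 4},
    {"site": "us-east",     "vip": ANYCAST_VIP, "hops": 9},
    {"site": "asia-east",   "vip": ANYCAST_VIP, "hops": 12},
]

def route_packet(dest_vip: str) -> str:
    """Pick the topologically nearest site announcing dest_vip."""
    candidates = [s for s in frontend_sites if s["vip"] == dest_vip]
    nearest = min(candidates, key=lambda s: s["hops"])
    return nearest["site"]

print(route_packet(ANYCAST_VIP))  # -> "europe-west"
```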

Two main issues remain unsolved, however. First, too many nearby users may overwhelm a frontend site. Second, BGP route recalculation might reset in-progress connections.

Consider an ISP that frequently recalculates its BGP routes, so that one of its users is routed alternately to either of two frontend sites. Each time the BGP route flaps, all in-progress TCP streams are reset because the unfortunate user's packets are now directed to a new frontend that has no active TCP session state.

To address these problems, Google leverages its connection-level load balancer, Maglev (described shortly), to cohere TCP streams even when routes flap. This technique is referred to as stabilized anycast.

# Way Number Two – Stabilized Anycast

Google implements stabilized anycast using Maglev, its custom load balancer. To stabilize anycast, each Maglev machine is provided with a way to map client IPs to the closest Google frontend site.


Note: The Maglev machines at the closest frontend site simply treat the packet as they would any other packet and route it to a local backend.

Sometimes a Maglev machine processes a packet destined for an anycast VIP on behalf of a client that is actually closer to another frontend site. In this case, Maglev forwards that packet to a Maglev machine located at the closest frontend site for delivery.
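The following is a minimal sketch of that forwarding decision, under assumed site names and IP prefixes rather than Google's actual mapping data: look up the client's closest frontend site, serve locally if it matches, otherwise hand the packet to a Maglev at the closer site.

```python
# Sketch of the stabilized anycast decision on one Maglev machine.
# LOCAL_SITE and the prefix-to-site map are hypothetical.

LOCAL_SITE = "us-east"

closest_site_by_prefix = {
    "198.51.100.": "us-east",
    "192.0.2.":    "europe-west",
}

def handle_packet(client_ip: str, packet: bytes) -> str:
    closest = next(
        (site for prefix, site in closest_site_by_prefix.items()
         if client_ip.startswith(prefix)),
        LOCAL_SITE,  # default: serve locally if the client is unknown
    )
    if closest == LOCAL_SITE:
        return f"deliver {len(packet)} bytes to a local backend at {LOCAL_SITE}"
    return f"forward {len(packet)} bytes to a Maglev at {closest}"

print(handle_packet("192.0.2.44", b"GET / HTTP/1.1\r\n"))
```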

# Way Number Three – Maglev

Maglev, shown in the figure below, is Google's custom distributed packet-level load balancer and an integral part of its cloud architecture. Maglev machines manage incoming traffic to a cluster, providing stateful TCP-level load balancing across the frontend servers.

[Figure: Maglev, Google's distributed packet-level load balancer]

Maglev differs from traditional hardware load balancers in several key ways:

  • All packets destined for a given IP address are spread evenly across a pool of Maglev machines via Equal-Cost Multi-Path (ECMP) forwarding. This lets cloud developers boost Maglev capacity simply by adding servers to the pool (see the sketch after this list).

Note: Spreading packets evenly also enables Maglev redundancy to be modeled as N + 1, which improves availability and reliability over traditional load balancing systems (which typically rely on active/passive pairs to deliver 1 + 1 redundancy).

  • Maglev is a Google custom solution, so Google controls the system end to end and can experiment and iterate quickly.
  • Maglev runs on commodity hardware in Google's datacenters, which greatly simplifies deployment.
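As a rough illustration of the ECMP spreading mentioned in the first bullet (not Google's implementation, and with a hypothetical pool), a flow hash can pick one machine out of an equal-cost pool, so packets of the same flow stay together and capacity grows by adding machines:

```python
# Sketch of ECMP-style spreading: hash the flow identifier to pick one
# Maglev machine from an equal-cost pool. Pool names are hypothetical.

import zlib

maglev_pool = ["maglev-1", "maglev-2", "maglev-3"]

def ecmp_next_hop(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> str:
    """Hash the flow so all packets of one flow land on the same machine."""
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return maglev_pool[zlib.crc32(flow) % len(maglev_pool)]

# Adding another entry to maglev_pool immediately adds forwarding capacity.
print(ecmp_next_hop("198.51.100.7", 52311, "203.0.113.10", 443))
```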

Maglev packet delivery uses consistent hashing and connection tracking. These techniques coalesce TCP streams at HTTP reverse proxies (known as Google Front Ends, or GFEs), which terminate the TCP sessions.

Consistent hashing and connection tracking are key to Maglev's ability to scale by packet rather than by the number of TCP connections. When a router receives a packet destined for a VIP hosted by Maglev, it forwards the packet to any Maglev machine in the cluster through ECMP.

When a Maglev machine receives the packet, it computes the packet's 5-tuple hash and looks up the hash value in its connection tracking table, which contains routing results for recent connections.

If Maglev finds a match and the selected backend service is still healthy, it reuses that connection.

Otherwise, Maglev falls back to consistent hashing to choose a backend. The combination of these techniques eliminates the need to share connection state among individual Maglev machines.
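The sketch below mirrors that per-packet decision under simplifying assumptions: it is not Google's Maglev code, the backend names are hypothetical, and rendezvous hashing stands in for Maglev's own consistent hashing scheme.

```python
# Sketch of the routing step described above: 5-tuple hash, connection
# tracking lookup, and a consistent-hash fallback for new or stale flows.

import hashlib

backends = ["gfe-1", "gfe-2", "gfe-3"]        # healthy GFE backends (hypothetical)
connection_table: dict[int, str] = {}          # recent flow hash -> chosen backend

def five_tuple_hash(src_ip, src_port, dst_ip, dst_port, proto="tcp") -> int:
    key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")

def consistent_choice(flow_hash: int) -> str:
    # Stand-in for Maglev's consistent hashing: rendezvous hashing, which
    # also minimizes remapping when the backend set changes.
    return max(backends,
               key=lambda b: hashlib.sha256(f"{flow_hash}|{b}".encode()).digest())

def route(src_ip, src_port, dst_ip, dst_port) -> str:
    h = five_tuple_hash(src_ip, src_port, dst_ip, dst_port)
    backend = connection_table.get(h)
    if backend is None or backend not in backends:  # no entry, or backend unhealthy
        backend = consistent_choice(h)
        connection_table[h] = backend               # remember for later packets
    return backend

print(route("198.51.100.7", 52311, "203.0.113.10", 443))
```

Because the fallback is deterministic for a given flow hash and backend set, two Maglev machines handling packets of the same flow arrive at the same backend without sharing any state, which is the property the text above relies on.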

Concluding the load balancing techniques

In Google's experience, there is no perfect traffic management configuration. Autoscaling is a powerful tool here, but it can easily go wrong.

Unless it is carefully configured, autoscaling can have disastrous consequences: for instance, potentially catastrophic feedback cycles between load balancing, autoscaling, and load shedding when these tools are configured in isolation.

As the Pokémon GO case study shows, traffic management works best when it is based on a holistic view of the interactions among the systems involved.

Such systems also underpin the services offered by QuickBooks hosting providers.

In short, a mitigation strategy might involve setting flags, enabling expensive logging, changing default behaviors, or exposing the current values of parameters. Rather than working around them, let traffic management systems built on Google's load balancing techniques make the required decisions.