What factors affect website/application page loading speeds?
The page loading speed of an online web application or website is impacted by many factors.

Here is a high-level list of 10 such factors:

   1. Available network bandwidth of the user's Internet Service Provider (ISP)
   2. The available free memory of the device loading the webpage
   3. The user's web browser
   4. The CPU and GPU of the user's device
   5. The size of the assets needed to show the webpage
   6. The number of HTTP connections that must be established to download those assets
   7. The time taken by the server to process and respond to the request
   8. The location of the server and assets, contributing to the round-trip time of request/response
   9. The number and complexity of the DOM elements on the webpage
  10. The protocols used to communicate between the application's server and the user's browser
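Factors #7 and #8 both contribute to the total time between sending a request and receiving the response. As a rough illustration, the sketch below times a complete HTTP request/response round trip in Python (the function name is ours, and real measurements would average many samples):

```python
# Sketch: measure the total wall-clock time of one HTTP
# request/response cycle (covers server processing time plus
# network round-trip time, i.e. factors #7 and #8 above).
import time
import urllib.request

def time_request(url: str) -> float:
    """Return elapsed seconds for one full request/response."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()  # include time to download the response body
    return time.perf_counter() - start

# Example use (URL is illustrative):
#   elapsed = time_request("https://yourdomain.com/")
```

In practice you would repeat the measurement and look at percentiles, since any single request can be skewed by caching or transient network conditions.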

Of the list above, web application developers like us can control items #5-10.

To optimize webpage loading speed, we continuously analyze, optimize, and implement different methods to get the best results. One of those methods is called "Load Balancing".

What is "Load Balancing"?

Load balancing is a key component of highly-available infrastructures commonly used to improve the performance and reliability of web sites, applications, databases and other services by distributing the workload across multiple servers.

In a web infrastructure with no load balancing, the user connects directly to the web server at yourdomain.com. If this single web server goes down, the user will no longer be able to access the website. In addition, if many users try to access the server simultaneously and it is unable to handle the load, they may experience slow load times or be unable to connect at all.

This single point of failure can be mitigated by introducing a load balancer and at least one additional web server on the backend. Typically, all of the backend servers will supply identical content so that users receive consistent content regardless of which server responds.

With a load balancer in place, the user accesses the load balancer, which forwards the user's request to a backend server, which then responds to the user's request.

The load balancer's forwarding rules define the protocol and port on the load balancer itself and map them to the protocol and port the load balancer uses to route the traffic to the backend.
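A forwarding rule can be thought of as a simple mapping from the load balancer's frontend protocol/port to a backend protocol/port. A minimal sketch (the class and field names are ours, not from any particular load balancer's API):

```python
# Hypothetical sketch of a forwarding rule: traffic arriving on the
# load balancer's frontend protocol/port is routed to the backend
# protocol/port. Names are illustrative.
from dataclasses import dataclass

@dataclass
class ForwardingRule:
    frontend_protocol: str  # protocol the load balancer listens on
    frontend_port: int      # port the load balancer listens on
    backend_protocol: str   # protocol used to reach backend servers
    backend_port: int       # port used to reach backend servers

# e.g. terminate HTTPS at the load balancer, forward as plain HTTP
rule = ForwardingRule("https", 443, "http", 80)
print(f"{rule.frontend_protocol}:{rule.frontend_port} -> "
      f"{rule.backend_protocol}:{rule.backend_port}")
```

The HTTPS-to-HTTP example above is a common pattern (TLS termination at the load balancer), but the frontend and backend protocols can also be identical.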

How does the load balancer choose the backend server?

Load balancers choose which server to forward a request to based on two factors: first, they ensure that any server they can choose is actually responding appropriately to requests; then, they use a pre-configured rule to select one server from that healthy pool.

Health Checks

Load balancers should only forward traffic to "healthy" backend servers. To monitor the health of a backend server, health checks regularly attempt to connect to backend servers using the protocol and port defined by the forwarding rules to ensure that servers are listening. If a server fails a health check, and therefore is unable to serve requests, it is automatically removed from the pool, and traffic will not be forwarded to it until it responds to the health checks again.
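The simplest form of the check described above is a TCP connection attempt. Here is a minimal sketch, under the assumption that a backend counts as "healthy" if it accepts a connection on the port defined by the forwarding rule (the function name and addresses are illustrative):

```python
# Minimal TCP health check sketch: a backend is treated as "healthy"
# if it accepts a TCP connection on the given port within a timeout.
import socket

def is_healthy(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if the backend accepts a TCP connection in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example use: keep only servers that pass the check in the pool, e.g.
#   healthy_pool = [b for b in backends if is_healthy(b, 80)]
```

Real load balancers usually go further, e.g. requesting a dedicated health-check URL over HTTP and requiring a 200-range status code, so that a server which accepts connections but returns errors is still removed from the pool.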

Load Balancing Algorithms

The load balancing algorithm determines which of the healthy backend servers will be selected for each request. Commonly used algorithms include round robin, least connections, and source IP hash.
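Round robin, one commonly used algorithm, simply sends each new request to the next server in rotation. A minimal sketch (class name is ours):

```python
# Sketch of round robin selection: each new request goes to the
# next server in the pool, wrapping around at the end.
import itertools

class RoundRobinBalancer:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        """Return the next server in rotation."""
        return next(self._cycle)

lb = RoundRobinBalancer(["backend-1", "backend-2", "backend-3"])
print([lb.next_server() for _ in range(4)])
# → ['backend-1', 'backend-2', 'backend-3', 'backend-1']
```

Note that this sketch assumes a fixed pool; a production balancer would rebuild or adjust the rotation as health checks add and remove servers.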

This is How Prosperna's Highly Available Infrastructure Looks

NOTE: Prosperna's datacenter is located in Singapore, NOT the United States.