

Application Server Horizontal Scaling

The first thing you will notice when increasing the number of application server instances is a Load Balancer node (NGINX by default) automatically added to your environment, which appears in the topology wizard:

Such a server is required in order to work with multiple compute nodes, as it is placed in front of your application and becomes the entry point of your environment. The load balancer's key role is to handle all incoming user requests and distribute them among the specified number of app servers.

Load distribution is performed via HTTP balancing, though you can optionally configure TCP balancing as well (e.g. to meet your application's requirements, achieve faster request serving, or balance non-HTTP traffic).
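To make the load balancer's role more concrete, below is a minimal sketch of HTTP balancing logic, written in Go rather than as an actual NGINX configuration. It simply forwards each incoming request to the next app server in round-robin order; the backend addresses are hypothetical placeholders and do not correspond to any real environment.

```go
// A minimal round-robin HTTP balancer sketch (illustrative only).
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Hypothetical app server instances sitting behind the balancer.
	backends := []string{
		"http://app-node-1:8080",
		"http://app-node-2:8080",
	}

	// Build one reverse proxy per backend.
	proxies := make([]*httputil.ReverseProxy, len(backends))
	for i, b := range backends {
		u, err := url.Parse(b)
		if err != nil {
			log.Fatal(err)
		}
		proxies[i] = httputil.NewSingleHostReverseProxy(u)
	}

	var next uint64
	// Every incoming request is forwarded to the next backend in turn
	// (simple round-robin), which is the essence of HTTP balancing.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		i := atomic.AddUint64(&next, 1) % uint64(len(proxies))
		proxies[i].ServeHTTP(w, r)
	})

	log.Println("balancer listening on :80")
	log.Fatal(http.ListenAndServe(":80", nil))
}
```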

It’s also vital to note that each newly added application server node copies the initial (master) one, i.e. it contains the same set of configurations and files. So, if you already have several instances with varying content and would like to add more, the very first node will be cloned while scaling.

Tip: You can also automate application server horizontal scaling based on the incoming load with the help of tunable triggers, as illustrated in the sketch below.
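The following sketch illustrates the idea behind such a load-based trigger; it is a hypothetical model of the decision logic, not the platform's actual trigger API. The thresholds, node limits, and load samples are assumed values for illustration.

```go
// A hypothetical illustration of a tunable, load-based scaling trigger.
package main

import "fmt"

// Trigger describes a simple scaling rule: add or remove a node when the
// observed load crosses a threshold, while staying within the node bounds.
type Trigger struct {
	ScaleOutLoad float64 // e.g. add a node when load exceeds 70%
	ScaleInLoad  float64 // e.g. remove a node when load drops below 20%
	MinNodes     int
	MaxNodes     int
}

// Evaluate returns the desired node count for the observed average load.
func (t Trigger) Evaluate(currentNodes int, avgLoad float64) int {
	switch {
	case avgLoad > t.ScaleOutLoad && currentNodes < t.MaxNodes:
		return currentNodes + 1
	case avgLoad < t.ScaleInLoad && currentNodes > t.MinNodes:
		return currentNodes - 1
	default:
		return currentNodes
	}
}

func main() {
	trigger := Trigger{ScaleOutLoad: 70, ScaleInLoad: 20, MinNodes: 1, MaxNodes: 5}

	// Simulated load samples (CPU %, hypothetical values).
	nodes := 2
	for _, load := range []float64{85, 90, 40, 15, 10} {
		nodes = trigger.Evaluate(nodes, load)
		fmt.Printf("load %.0f%% -> %d node(s)\n", load, nodes)
	}
}
```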

In addition, if your application server includes an administrator panel (like GlassFish, for example), you can Login using the appropriate expandable list:

Also, within the same menu, you can Reset passwords to regain administrator access to your cluster.
