Front End pattern

Next: Proxy pattern | Return to ACP Patterns

This pattern first appeared in the paper "A Pattern Language for Application-level Communication Protocols"[1] published at IARIA ICSEA 2016 by Jorge Edison Lascano and Stephen Wright Clyde (authors of the page).

Front End pattern

Name and Overview

Front-End (F-E)

The F-E pattern is concerned with decoupling the initiator from the responder: the Front End process offers an interface to the initiator (A), but it allows the responder (B) to reply directly to A. The Front End process handles the localization of the Resource Manager, so that alternative implementations can be substituted in case of extreme load or failures of the resource managers.


The Front End pattern addresses the problems of making the location of a shared resource transparent to the client and allowing the number of resources to change dynamically. A resource client sends requests to a front-end process that automatically redistributes them to appropriate resource managers (B processes). After processing a request, a resource manager replies back to the client directly. The front-end process can use a variety of criteria to decide how to redistribute requests, including request type, resource type or identity, and resource manager load. By itself, this pattern’s primary focus is on the distribution and scalability of resources.



A client needs to access a shared resource or service replicated across one or more resource managers, with a modest degree of reliability.


There is no need to guarantee “at most once” semantics for the execution of the requested service. The system needs to be scalable or handle spikes in load.


The solution is similar to the Request-Reply (RR) pattern, except that the request is directed to a front-end process, which then delegates it to one of the resource managers. The resource manager responds to the requesting process directly.

Process Roles

Initiator, Front End, Responder


Request, Reply, Initiator End Point


Initiator -> |Request| -> Front End
Front End -> |Request and Initiator End Point| -> Responder
(Responder processes the Request)
Responder -> |Reply| -> Initiator

Semantics and Behavior

  • The Initiator and Responder have the same behavior as in the RR pattern
  • The Front End forwards the Request, together with the Initiator's End Point, to the Responder
  • The Responder replies directly to the Initiator
  • The Initiator's End Point has to be reachable from the Responder, so the Front End and Responder should be on the same network
  • The Front End does not need to track the conversation
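The exchange above can be sketched with UDP sockets on localhost. The port numbers and the `host:port|payload` framing are assumptions made for illustration only; the pattern itself does not prescribe a transport or message format.

```python
import socket
import threading
import time

FRONT_END = ("127.0.0.1", 9000)   # illustrative ports, not part of the pattern
RESPONDER = ("127.0.0.1", 9001)

def front_end():
    # Receives one Request, forwards it along with the Initiator's
    # End Point, and keeps no conversation state afterwards.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(FRONT_END)
        request, initiator_ep = s.recvfrom(1024)
        forwarded = f"{initiator_ep[0]}:{initiator_ep[1]}|".encode() + request
        s.sendto(forwarded, RESPONDER)

def responder():
    # Processes the Request and replies directly to the Initiator,
    # bypassing the Front End.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(RESPONDER)
        data, _ = s.recvfrom(1024)
        ep, request = data.split(b"|", 1)
        host, port = ep.decode().rsplit(":", 1)
        s.sendto(b"reply:" + request, (host, int(port)))

threading.Thread(target=front_end, daemon=True).start()
threading.Thread(target=responder, daemon=True).start()
time.sleep(0.3)                                   # let both sockets bind

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as initiator:
    initiator.bind(("127.0.0.1", 0))              # ephemeral Initiator End Point
    initiator.settimeout(2)
    initiator.sendto(b"request", FRONT_END)
    reply, sender = initiator.recvfrom(1024)      # arrives from the Responder
    print(reply.decode(), "from", sender)
```

Note that the Front End exits after forwarding: it holds no per-conversation state, which is exactly what keeps it from becoming a bottleneck on the reply path.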


One message is sent from A to B, with a certain level of reliability and synchronicity, through a third process (the Front End) that handles A's request; B then sends a reply message directly back to A. This pattern provides a high level of scalability.

Quality Rating Justification
This pattern does not directly address Reliability.
This pattern does not directly address Synchronicity.
This pattern does not directly address Longevity.
Adaptability for Scalable Distribution
This pattern qualifies for a high level of scalable distribution. It fulfills all three criteria, as illustrated by NGINX, a load balancer that uses names of web servers rather than their actual addresses to direct web requests.

a. This pattern provides absolute location transparency to and from multiple hosts.
b. Load balancing over replicated, independent processes is one of the main features of the Front End pattern. For example, NGINX provides a variety of load-balancing methods to distribute load across worker processes.
c. The Front End pattern may or may not untangle cross-cutting concerns. For example, NGINX separates out tangled cross-cutting concerns such as authorization and health monitoring from the worker nodes.

Known uses

  • Load balancers
  • Authorization services

Aliases and Related Patterns

  • Requestor

Related work

  • Media Type Negotiation [2] [3]
  • Requestor pattern, as part of the Distribution Infrastructure section in Pattern-Oriented Software Architecture Volume 4, (POSA 4), chapter 10[4]

Examples of Application

Occurrences of this pattern can be found in implementations of load balancers, as is the case with NGINX. NGINX is an open-source, lightweight web server and proxy server. It load-balances requests across multiple application instances to optimize resource utilization, maximize throughput, reduce latency, and ensure fault-tolerant configurations. [5] [6]

NGINX contains configurations for multiple application instances; these configurations can be static or created on the fly. It can use a variety of load-balancing methods, such as round-robin, ip-hash, least-connected server, generic hash, or the server with the lowest average latency. As a front end, NGINX supports many other rich features besides load balancing, such as re-routing requests according to their sessions, authorization, health monitoring of the running servers, and identifying malicious requests.
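Two of the load-balancing methods named above, round-robin and ip-hash, can be sketched in a few lines. The server names are made up, and hashing the client address with MD5 is an assumption chosen for illustration, not NGINX's actual algorithm:

```python
import hashlib
import itertools

servers = ["app1:8080", "app2:8080", "app3:8080"]   # hypothetical upstreams

# Round-robin: hand out servers in cyclic order, one per request.
_rr = itertools.cycle(servers)
def round_robin():
    return next(_rr)

# IP-hash: the same client address always maps to the same server,
# which yields a simple form of session persistence.
def ip_hash(client_ip):
    digest = hashlib.md5(client_ip.encode()).digest()
    return servers[digest[0] % len(servers)]

print([round_robin() for _ in range(4)])
# ['app1:8080', 'app2:8080', 'app3:8080', 'app1:8080']
print(ip_hash("203.0.113.7") == ip_hash("203.0.113.7"))   # True
```

The trade-off mirrors the one the front end faces in practice: round-robin spreads load evenly but gives no affinity, while ip-hash pins a client to one server at the cost of potentially uneven load.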

NGINX Scenario

AC1- Any web-based client request first goes to the NGINX load balancer
AC2- Based on the request header “Host”, NGINX decides which of the listed server(s) will process the request
AC3- In case of multiple hosts, it can apply one of the load-balancing methods described above
AC4- If NGINX cannot find a better match, it routes the request to the default server
AC5- Once the relevant server processes the request, it sends the reply directly back to the client rather than routing it through NGINX
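Steps AC2 and AC4 amount to a lookup on the Host header with a default fallback. A minimal sketch, in which the host names and server groups are made up for illustration:

```python
# Hypothetical mapping from Host header to an upstream server group.
upstreams = {
    "www.example.com": ["web1", "web2"],
    "api.example.com": ["api1"],
}
DEFAULT_POOL = ["fallback"]   # AC4: used when no configured host matches

def pick_pool(host_header):
    # AC2: select the server group configured for the request's Host header;
    # AC3 would then apply a load-balancing method within the chosen group.
    return upstreams.get(host_header, DEFAULT_POOL)

print(pick_pool("api.example.com"))   # ['api1']
print(pick_pool("unknown.example"))   # ['fallback']
```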
Next: Proxy pattern | Return to ACP Patterns


  1. Jorge Edison Lascano, Stephen Wright Clyde, “A Pattern Language for Application-level Communication Protocols,” in Proceedings of the Eleventh International Conference on Software Engineering Advances (ICSEA 2016), 2016, pp. 22-30
  2. “Service Design Patterns - Client-Service Interactions - Media Type Negotiation.” [Online]. Available: [Accessed: 11-Mar-2017].
  3. R. Daigneau, Service Design Patterns: Fundamental Design Solutions for SOAP/WSDL and RESTful Web Services, 1 edition. Upper Saddle River, NJ: Addison-Wesley Professional, 2011.
  4. F. Buschmann, K. Henney, and D. C. Schmidt, Pattern-Oriented Software Architecture Volume 4: A Pattern Language for Distributed Computing, Volume 4 edition. Chichester England; New York: Wiley, 2007.
  5. “NGINX Load Balancing - HTTP and TCP Load Balancer,” NGINX. [Online]. Available: [Accessed: 20-Feb-2017].
  6. “How nginx processes a request.” [Online]. Available: [Accessed: 20-Feb-2017].