
Fail Over

Failover means switching to another machine when the machine currently serving requests fails.

Failover is an important technique for achieving high availability. Typically, a load balancer is configured to fail over to another machine when the main machine fails.

To minimize downtime, most load balancers support heartbeat checks, which verify that the target machine is responding. As soon as a heartbeat check fails, the load balancer stops sending requests to that machine and redirects them to the other machines in the cluster.
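
As a rough sketch of how a heartbeat check could work (the backend addresses and the /health path below are assumptions, not a real load balancer's API), each node is probed periodically and taken out of rotation as soon as it stops responding:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.List;
    import java.util.Set;
    import java.util.concurrent.*;

    // Minimal heartbeat-check sketch: the backend URLs and the /health endpoint are assumptions.
    public class HeartbeatChecker {
        private final List<String> backends = List.of("http://app1:8080", "http://app2:8080");
        private final Set<String> healthy = ConcurrentHashMap.newKeySet();

        public void start() {
            healthy.addAll(backends);
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Probe every backend every few seconds; failover happens by removing a node from rotation.
            scheduler.scheduleAtFixedRate(this::probeAll, 0, 5, TimeUnit.SECONDS);
        }

        private void probeAll() {
            for (String backend : backends) {
                if (isAlive(backend)) {
                    healthy.add(backend);        // machine is back: return it to rotation
                } else {
                    healthy.remove(backend);     // heartbeat failed: stop sending requests to it
                }
            }
        }

        private boolean isAlive(String backend) {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(backend + "/health").openConnection();
                conn.setConnectTimeout(2000);
                conn.setReadTimeout(2000);
                return conn.getResponseCode() == 200;
            } catch (Exception e) {
                return false;
            }
        }

        // The request path would pick any backend from 'healthy' instead of a fixed machine.
        public Set<String> healthyBackends() {
            return healthy;
        }
    }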

IP Address Affinity

IP address affinity is another popular way to do load balancing. In this approach, the client IP address is associated with a server node. All requests from a client IP address are served by one server node.

This approach can be really easy to implement, since the client IP address is always available to the load balancer (from the connection itself or from a forwarding header) and no additional configuration is needed.
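
A minimal sketch of the idea, assuming a fixed list of backend nodes (the node addresses are made up): the client IP is hashed, so the same address always maps to the same node.

    import java.util.List;

    // IP-affinity sketch: the backend node addresses are assumptions.
    public class IpAffinityBalancer {
        private final List<String> nodes = List.of("app1:8080", "app2:8080", "app3:8080");

        // The same client IP always hashes to the same index,
        // so all requests from one IP address are served by one node.
        public String nodeFor(String clientIp) {
            int index = Math.floorMod(clientIp.hashCode(), nodes.size());
            return nodes.get(index);
        }
    }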

This type of load balancing can be useful if your clients are likely to have disabled cookies.

However, there is a downside to this approach. If many of your users are behind a NATed IP address, they will all end up on the same server node, which may cause an uneven load across your server nodes.

NATed IP addresses are really common; in fact, any time you are browsing from an office network, it is likely that you and all your coworkers are sharing the same NATed IP address.

Sticky Session

In a load-balanced application where user information is stored in the session, the session data would normally have to be available to all machines. This can be avoided by always serving the requests of a particular user session from the same machine.

The machine is associated with the session as soon as the session is created, and all requests in that session are then redirected to the associated machine. This ensures the user's data lives on only one machine while the load is still shared.

In the Java world, this is typically done using a jsessionid cookie. The cookie is sent to the client on the first request, and every subsequent request from the client must contain that same cookie so the session can be identified.
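
A rough sketch of the routing logic (the node addresses are made up, and a production load balancer typically encodes the node in the cookie value itself rather than keeping a lookup table):

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    // Sticky-session sketch: node addresses are assumptions; real load balancers
    // usually encode the chosen node inside the session cookie instead of keeping a table.
    public class StickySessionRouter {
        private final List<String> nodes = List.of("app1:8080", "app2:8080");
        private final Map<String, String> sessionToNode = new ConcurrentHashMap<>();
        private final AtomicInteger counter = new AtomicInteger();

        // Called with the jsessionid cookie value from the request (null if the cookie is absent).
        public String route(String jsessionId) {
            if (jsessionId == null) {
                // No session yet: pick any node; that node will create the session and set the cookie.
                return pickNode();
            }
            // Existing session: always return the node already associated with it.
            return sessionToNode.computeIfAbsent(jsessionId, id -> pickNode());
        }

        private String pickNode() {
            return nodes.get(Math.floorMod(counter.getAndIncrement(), nodes.size()));
        }
    }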

There are a few issues that you may face with this approach:

  • The client browser may not support cookies, in which case your load balancer cannot tell which session a request belongs to. This may cause strange behavior for users whose browsers do not accept cookies.
  • If one of the machines fails or goes down, the session data held by that machine is lost and there is no way to recover those user sessions.

Twelve-Factor App principles

The Twelve-Factor App is a methodology for building software-as-a-service applications. These best practices are designed to enable applications to be built with portability and resilience when deployed to the web.

  • Codebase – There should be exactly one codebase for a deployed service with the codebase being used for many deployments.
  • Dependencies – All dependencies should be declared, with no implicit reliance on system tools or libraries.
  • Config – Configuration that varies between deployments should be stored in the environment (see the sketch after this list).
  • Backing services – All backing services are treated as attached resources, which are attached and detached by the execution environment.
  • Build, release, run – The delivery pipeline should strictly consist of build, release, run.
  • Processes – Applications should be deployed as one or more stateless processes with persisted data stored on a backing service.
  • Port binding – Self-contained services should make themselves available to other services by specified ports.
  • Concurrency – Concurrency is advocated by scaling individual processes.
  • Disposability – Fast startup and shutdown are advocated for a more robust and resilient system.
  • Dev/Prod parity – All environments should be as similar as possible.
  • Logs – Applications should produce logs as event streams and leave the execution environment to aggregate.
  • Admin Processes – Any needed admin tasks should be kept in source control and packaged with the application.
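
To make the Config factor concrete, here is a minimal Java sketch; the DATABASE_URL and PORT variable names are hypothetical:

    // Config-factor sketch: DATABASE_URL and PORT are hypothetical variable names.
    // Configuration that differs between deployments is read from the environment,
    // not hard-coded or checked into the repository.
    public class AppConfig {
        public static void main(String[] args) {
            String databaseUrl = System.getenv().getOrDefault("DATABASE_URL", "jdbc:postgresql://localhost/dev");
            int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));

            // The same build artifact runs unchanged in staging and production;
            // only the environment variables differ between deployments.
            System.out.println("Connecting to " + databaseUrl + ", listening on port " + port);
        }
    }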

SOLID

S.O.L.I.D is an acronym for the first five object-oriented design (OOD) principles by Robert C. Martin.

  • S – Single-responsibility principle. A class should have one and only one reason to change, meaning that a class should have only one job.
  • O – Open-closed principle. Objects or entities should be open for extension, but closed for modification.
  • L – Liskov substitution principle. Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S, where S is a subtype of T.
  • I – Interface segregation principle. A client should never be forced to implement an interface that it doesn’t use; clients shouldn’t be forced to depend on methods they do not use.
  • D – Dependency inversion principle. Entities must depend on abstractions, not on concretions: high-level modules must not depend on low-level modules; both should depend on abstractions.
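
As a small illustration of the last principle, here is a minimal Java sketch (all class and interface names are made up): the high-level OrderService depends on the MessageSender abstraction rather than on the concrete EmailSender.

    // Dependency-inversion sketch: MessageSender, EmailSender and OrderService are made-up names.

    // Abstraction that both the high-level and low-level modules depend on.
    interface MessageSender {
        void send(String recipient, String message);
    }

    // Low-level module: one concrete way of sending messages.
    class EmailSender implements MessageSender {
        @Override
        public void send(String recipient, String message) {
            System.out.println("Emailing " + recipient + ": " + message);
        }
    }

    // High-level module: depends only on the abstraction, so it also has a single
    // reason to change (order handling) and stays open for extension via new senders.
    class OrderService {
        private final MessageSender sender;

        OrderService(MessageSender sender) {
            this.sender = sender;
        }

        void confirmOrder(String customer) {
            sender.send(customer, "Your order has been confirmed");
        }
    }

    public class SolidDemo {
        public static void main(String[] args) {
            // Swapping in an SMS or push-notification sender requires no change to OrderService.
            new OrderService(new EmailSender()).confirmOrder("customer@example.com");
        }
    }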

Continuous deployment

Continuous deployment goes one step further than continuous delivery. With this practice, every change that passes all stages of your production pipeline is released to your customers. There’s no human intervention, and only a failed test will prevent a new change from being deployed to production.

Continuous deployment is a strategy for software releases wherein any code commit that passes the automated testing phase is automatically released into the production environment, making changes that are visible to the software’s users.

Continuous deployment eliminates the human safeguards against unproven code in live software. It should only be implemented when the development and IT teams rigorously adhere to production-ready development practices and thorough testing, and when they apply sophisticated, real-time monitoring in production to discover any issues with new releases.

Continuous delivery

Continuous delivery is an extension of continuous integration that makes sure you can release new changes to your customers quickly and in a sustainable way. This means that on top of having automated your testing, you have also automated your release process, and you can deploy your application at any point in time at the click of a button.

Getting software released to users is often a painful, risky, and time-consuming process.

Continuous Delivery can help large organizations become as lean, agile and innovative as startups. Through reliable, low-risk releases, Continuous Delivery makes it possible to continuously adapt software in line with user feedback, shifts in the market and changes to business strategy. Test, support, development and operations work together as one delivery team to automate and streamline the build-test-release process.

Continuous integration

Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.

By integrating regularly, you can detect errors quickly, and locate them more easily.

Developers practicing continuous integration merge their changes back to the main branch as often as possible. By doing so, you avoid the integration hell that usually happens when people wait for release day to merge their changes into the release branch.

Continuous Integration is cheap. Not integrating continuously is expensive. If you don’t follow a continuous approach, you’ll have longer periods between integrations, which makes it exponentially more difficult to find and fix problems. Such integration problems can easily knock a project off schedule, or cause it to fail altogether.

Continuous Integration brings multiple benefits to your organization:

  • Say goodbye to long and tense integrations
  • Increase visibility enabling greater communication
  • Catch issues early and nip them in the bud
  • Spend less time debugging and more time adding features
  • Build a solid foundation
  • Stop waiting to find out if your code’s going to work
  • Reduce integration problems allowing you to deliver software more rapidly