Cloudifying your legacy applications
Nikolett Hegedüs


This article deals with OpenShift and Kubernetes technology. You can find explanations of the terms used at the end of the article.


If you want to take the necessary steps to modernize your own application, the first thing to do is turn your pile of code into a container image.

 

The first step: containerization

The Source-to-Image (s2i) method:

Source-to-Image is an OpenShift tool for building reproducible Docker images by injecting source code into a Docker image. The new image will include the base (builder) image and the injected code.

What are the advantages?

  • There will be no Dockerfile, so there won’t be any container-specific junk in the code
  • The commands running inside the container won’t run as root (so it’s safe for the enterprise)
  • Operations teams can inspect and control s2i builders for security
  • You can build your own custom builder image on top of an existing stock builder image (by adding layers to the base image)

How does it work?

$ s2i build <source-location> <builder-image> <output-image-name>

The build command injects your source code into the builder image, runs the builder's assemble script, and commits the result as a new, runnable image.
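As a concrete sketch, assuming the s2i CLI and Docker are installed locally (the repository URL and image names below are illustrative, not from this article):

```shell
# Build a runnable image by injecting a sample Django app's source
# into a stock Python builder image.
s2i build https://github.com/sclorg/django-ex centos/python-36-centos7 myapp

# The result is an ordinary container image; run it like any other.
docker run -p 8080:8080 myapp
```

Note that no Dockerfile appears anywhere in this flow: the builder image and its assemble/run scripts do all the work.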

Advantages of building a custom builder image:

  • Reduced build time: you don’t need to download the dependencies, you can simply pre-package them into the builder
  • But beware! Leave some flexibility for the developers, so they can add new modules at runtime when possible
  • It can be done within a week (depending on the application)
  • You can consider using more than one container, but it's an app-by-app decision
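The scaffolding for a custom builder can be generated with the s2i CLI itself; the builder name and paths below are assumptions for illustration:

```shell
# Generate the skeleton of a custom builder image:
# a Dockerfile plus the s2i assemble/run/usage scripts.
s2i create my-custom-builder ./my-custom-builder

# After editing the generated scripts (e.g. to pre-package dependencies),
# build the builder image itself...
docker build -t my-custom-builder ./my-custom-builder

# ...then use it exactly like a stock builder.
s2i build ./myapp-src my-custom-builder myapp
```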

 

Second step: configuration

When it comes to configurations there are three options:

  1. Baked-in config (config-and-run)
  2. Self-configuration (run-and-config): the application starts as PID 1 inside the container, looks for its own config and generates one if it doesn’t exist
  3. ConfigMap method: the ConfigMap is a Kubernetes object with a name and a set of key-value pairs
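The ConfigMap method can be sketched from the CLI; the map name, keys, and deployment name below are illustrative assumptions:

```shell
# Create a ConfigMap from literal key-value pairs.
oc create configmap myapp-config \
    --from-literal=DB_HOST=db.example.com \
    --from-literal=LOG_LEVEL=info

# Expose the ConfigMap's keys to the application as environment
# variables, keeping the configuration out of the image itself.
oc set env dc/myapp --from=configmap/myapp-config
```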

Third step: cluster deployments

Using OpenShift: $ oc cluster up

  1. Take a laptop with Docker running on it.
  2. Download the oc binary.
  3. Run oc cluster up.
  4. It stands up an entire cluster, pulls down a registry and a router for running that cluster.

 

This is the fastest way to deploy an OpenShift cluster. You can go from zero to OpenShift in under 20 seconds – but on the first run it will pull down some Docker images.

 

Useful to keep in mind

 

  • Run it once: $ oc new-app -
  • Make it repeatable by building a reusable template that can be used in any namespace in an OpenShift cluster: $ oc export bc,dc,svc,is --as-template=myapp
  • Make it resilient through liveness and readiness probes
  • Make it stateful with persistent volumes and persistent volume claims
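Liveness and readiness probes can be attached directly from the CLI; the deployment name, port, and health endpoints below are assumptions:

```shell
# Liveness: restart the container when the app stops answering.
oc set probe dc/myapp --liveness --get-url=http://:8080/healthz

# Readiness: only route traffic to the pod once the app reports
# that it can actually handle requests.
oc set probe dc/myapp --readiness --get-url=http://:8080/ready
```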

 

Some explanation

Liveness check: You can try to load the homepage of your application. If it doesn't load, the pod (or container) is killed and a new one is started.

Readiness check: It is useful for checking whether the app is capable of doing the job it's supposed to do. Can it handle requests?

Pod: It is a group of one or more containers in Kubernetes. A pod also contains the shared storage for those containers and options about how to run the containers.

Persistent volume: It is a piece of networked storage in the cluster that has been provisioned by an administrator.

Persistent volume claim: It is a request for storage by a user.
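The two objects work together: a claim requests storage, and the cluster binds it to a provisioned volume. A minimal sketch from the CLI (the claim name, size, and mount path are assumptions):

```shell
# Create a persistent volume claim and mount it into the deployment,
# so the application's data survives pod restarts and rescheduling.
oc set volume dc/myapp --add --name=myapp-data \
    --type=pvc --claim-name=myapp-data --claim-size=1Gi \
    --mount-path=/var/lib/myapp
```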

 

Have some fun at the end!


If you like role-playing games or web applications generating random data about fictional planets and characters, you will enjoy this.

http://swn.emichron.com/

