Tuesday, December 29, 2015

Docker Machine Introduction

This is a short post that introduces the docker-machine tool, which is part of the Docker Toolbox.

Overview

Docker-machine is a command line interface (CLI) dedicated to managing virtualization environments in order to provision a minimal Docker environment.

Virtualization environments can be hypervisors (VMware, VirtualBox) or cloud providers (AWS, Azure, etc.). Docker-machine uses drivers to manage these virtualization environments. You can find the complete list of supported drivers here.
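
For example, provisioning a local Docker host on VirtualBox boils down to a single create command (the machine name "dev" below is arbitrary):

  $ docker-machine create --driver virtualbox dev
  $ docker-machine ls    # lists the machines and their state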

Fig. 1 - Local and Remote Virtual Machines

Since Docker only runs on Linux, each Virtual Machine is created with a Linux-based operating system. For a local VM, the base OS is boot2docker; for a remote VM hosted in a cloud, the base OS is Ubuntu.
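
As an illustration, creating a remote machine on AWS relies on the amazonec2 driver; the values below (credentials, region, VPC id) are placeholders to adapt to your own account:

  $ docker-machine create --driver amazonec2 \
      --amazonec2-access-key AKI***************** \
      --amazonec2-secret-key ********************* \
      --amazonec2-region eu-west-1 \
      --amazonec2-vpc-id vpc-123456 \
      aws-node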

Fig. 2 - Virtual Machine creation
You can also choose another operating system from the following list:

  • RancherOS
  • Debian
  • Red Hat Enterprise Linux
  • CentOS
  • Fedora

but be aware that, except for RancherOS, all the other operating systems are considered experimental.
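
As a sketch, with the VirtualBox driver an alternative OS such as RancherOS can be selected by pointing the machine at the corresponding ISO instead of the default boot2docker image (the ISO URL below is indicative and may change):

  $ docker-machine create --driver virtualbox \
      --virtualbox-boot2docker-url https://releases.rancher.com/os/latest/rancheros.iso \
      rancher-node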

When using a cloud provider, it is possible to specify the image from which the Virtual Machine will be created, by using the corresponding driver option.
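
For example, still with the amazonec2 driver, the base image is overridden with the --amazonec2-ami option (the AMI id below is a placeholder, and the credential options are omitted for brevity); other drivers expose a similar option, such as --digitalocean-image for DigitalOcean:

  $ docker-machine create --driver amazonec2 \
      --amazonec2-region eu-west-1 \
      --amazonec2-vpc-id vpc-123456 \
      --amazonec2-ami ami-xxxxxxxx \
      custom-node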


Finally, docker-machine configures the Docker client to connect to the Docker Engine created previously:

Fig. 3 - Docker client configuration

The configuration is done through environment variables within the current user session:

  • DOCKER_TLS_VERIFY
  • DOCKER_HOST
  • DOCKER_CERT_PATH
  • DOCKER_MACHINE_NAME

The Docker client retrieves the values of these environment variables and uses them to establish a connection to the Docker Engine and manage it.
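
In practice, the variables are displayed by the env command and loaded into the current shell with an eval; the values below are typical examples and will differ on your machine:

  $ docker-machine env dev
  export DOCKER_TLS_VERIFY="1"
  export DOCKER_HOST="tcp://192.168.99.100:2376"
  export DOCKER_CERT_PATH="/home/user/.docker/machine/machines/dev"
  export DOCKER_MACHINE_NAME="dev"
  # Run this command to configure your shell:
  # eval $(docker-machine env dev)

  $ eval $(docker-machine env dev)
  $ docker info    # the client now talks to the engine inside the "dev" machine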

Installation

Docker-machine can be installed through the Docker Toolbox package (recommended) or directly by following this documentation.
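
Once installed, a quick way to check that the binary is available on the PATH is to ask for its version:

  $ docker-machine version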

Friday, May 8, 2015

Artifact life cycle through obsolescence and modernization

This is a short article that deals with the artifact life cycle through obsolescence and modernization. It also gives some hints for designing artifacts in order to delay obsolescence and anticipate modernization.

Artifact life cycle

The following diagram illustrates the life cycle of any artifact:



The artifact life cycle starts with the implementation of a modern artifact.

But time goes on and inevitably leads to software erosion:
  • the artifact no longer matches technology standards,
  • the user experience is degraded by outdated user interfaces,
  • the technical debt is huge,
  • security vulnerabilities appear,
  • features are insufficient,
  • the total cost of ownership (TCO) is high.

At this point, there are two possible choices: modernize the artifact, or build a new one that meets the standards and end the cycle by decommissioning the old artifact.

Modernization is carried out by rewriting code, replacing frameworks and platforms, and designing new architectures. It is preferable to automate the modernization process.

Delay obsolescence

To prevent obsolescence from occurring too soon, you can try to:
  • carefully build your software stack to rely on proven frameworks that will last for at least several years
  • set up a continuous inspection platform to measure the software code quality and keep a high maintainability of the source code
  • rely on an efficient software factory to build, assemble and deploy. That way, it is easy to evolve and maintain your artifacts.
  • standardize your practices and architecture choices to avoid maintaining an overly heterogeneous set of technologies across your artifacts.

Anticipate modernization

To simplify the process of modernization, you can:
  • favor simple designs to ease the reverse engineering work when the modernization team takes over the artifacts
  • design by abstraction so that one specific part can be replaced without rewriting the whole artifact
  • rely heavily on automated tests to lower the cost of testing the changes induced by modernization and to easily spot any regression it could introduce
  • choose frameworks that can evolve easily