MICROSERVICES: WHAT THEY ARE, FEATURES AND BENEFITS

What are microservices?

Microservices are a software architecture approach that emerged chiefly to build modern Cloud-based applications. With this model, applications are broken up into small, autonomous, independent services (the microservices), which communicate with each other through well-defined APIs, with the aim of simplifying deployment and delivering high-quality software quickly. Indeed, microservices make it possible to scale easily and rapidly, promoting innovation and accelerating the development of new features.

Monolithic systems vs microservices

Microservices stand in opposition to the traditional monolithic approach used to develop standard applications, in which every component is built into the same element. The monolithic method has its disadvantages. For instance, with large applications it is harder to solve issues quickly and deploy new features. In this approach, all processes are tightly coupled and run as a single service, which means that when requests peak you need to resize the entire architecture. Adding or editing features becomes more complicated, and experimentation and the inclusion of new ideas are limited. Traditional architectures also increase the risk to application availability, because tightly coupled, interdependent processes amplify the impact of a failure in any single process.


The microservices approach simplifies problem resolution and optimizes development time. Each process or component constitutes a microservice and runs as an independent service. Microservices communicate through lightweight APIs and interact to complete shared tasks while remaining independent of each other, and similar processes can be shared among several applications. It is a granular model in which each service corresponds to a business function and performs a single function. In addition, the independence among services removes the updating, resizing and deployment issues typical of monolithic architectures.
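The pattern described above can be sketched in a few lines of Python: a hypothetical “catalog” microservice exposes one business function behind a small HTTP API, and a consumer interacts with it only through that contract. The service name, the endpoint and the data are illustrative, not taken from any real system.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class CatalogHandler(BaseHTTPRequestHandler):
    """A minimal 'catalog' microservice: one business function, one endpoint."""

    def do_GET(self):
        if self.path == "/products":
            body = json.dumps([{"id": 1, "name": "widget"}]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

# Port 0 asks the OS for any free port, so the sketch runs anywhere.
server = HTTPServer(("127.0.0.1", 0), CatalogHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A consumer (another service, a client) only knows the API contract,
# not the catalog service's internals.
url = f"http://127.0.0.1:{server.server_port}/products"
products = json.load(urlopen(url))
print(products)  # [{'id': 1, 'name': 'widget'}]
server.shutdown()
```

Because the consumer depends only on the HTTP contract, the catalog service can be rewritten, redeployed or scaled without touching its callers.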

Microservices and containers

Although microservice-based architecture is not entirely new, containers make it much easier to adopt. Containers are the ideal development environment for applications that use microservices, because they allow the different components of an application to run independently on the same hardware, with superior control over the software lifecycle. Containers give microservices a self-sufficient environment in which services, storage, networking and security are easier to manage. For these reasons, microservices and containers together form the basis for developing Cloud-native applications. By packaging microservices into containers, you can accelerate development and make it easier to transform and optimize existing apps.
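As a sketch of the pairing described here, a single microservice might be packaged into its own container image with a Dockerfile along these lines (the base image, file names and service name are illustrative assumptions):

```dockerfile
# Hypothetical image for one microservice; all names are illustrative.
FROM python:3.12-slim
WORKDIR /app
# Install only this service's dependencies: the container is self-sufficient.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
# One container, one service process.
CMD ["python", "catalog_service.py"]
```

Each microservice gets its own image, so the services can be built, deployed and scaled independently of one another.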

Benefits of microservices

Microservices, when implemented correctly, improve the availability and scalability of applications. As we have seen above, one of the most compelling aspects of microservices compared to monolithic systems is that a bug in a single service cannot affect the other services or compromise the entire application. This is just one of the benefits of microservices: let’s look at them together right now.

 
  • No single point of failure
  • High scalability and resilience
  • Faster time-to-market
  • Easier deployment
  • Top performance
  • Freedom in the use of technology
  • More experimentation and innovation
  • Reusable code
  • Flexibility in development language
  • Greater system agility
 

… and disadvantages

Don’t forget that microservices, although they represent an innovative and high-performing development model, also have disadvantages. These include the complexity associated with all distributed systems, the need for more solid testing protocols, and the requirement of experienced teams to manage processes and provide technical support. Moreover, if the application doesn’t need to scale rapidly or is not Cloud-based, it may gain little benefit from a microservice architecture.


Contact us

Fill out the form and one of our experts will contact you within 24 hours: we look forward to meeting you!

WHAT IS CLOUD HOSTING AND HOW DOES IT WORK?

Cloud hosting: meaning and features

Cloud hosting is a service that keeps your website online and provides all the space and traffic resources (CPU, RAM, storage, etc.) needed to manage your business on the web at its best. Unlike other hosting services, it is virtual rather than physical: Cloud hosting uses Cloud resources to guarantee maximum availability, better performance and high scalability. The service is based on virtualization systems designed for websites and internet services that require flexibility and high performance.


Cloud hosting’s features suit any kind of business and its particular needs in terms of online traffic. All Cloud hosting solutions are highly scalable and able to support the growth of your web project step by step, ensuring high availability and a customized, performance-based SLA.

What are the key benefits of Cloud hosting?

  • High availability
  • Top performance
  • High scalability
  • Pay-per-use model
  • Customized SLA

Cloud hosting can be activated with the Fully Managed option, which means the hosting service is managed by the Cloud provider’s specialists. A fully managed Cloud hosting service is beneficial for many reasons. For instance, it is the ideal solution for firms that have no time to handle the management, or for organizations that have no employees with the proper IT skills to administer the service correctly. Thanks to a dedicated team of experts, you won’t have to worry about IT management anymore, removing the stress, expense and time needed to manage it autonomously.

Cloud hosting Kubify with Kubernetes containers

Kubify is the only fully managed Cloud hosting solution in Italy using Kubernetes containers, able to ensure maximum performance, scalability and security. What’s new is that Kubify is a multi-datacenter, hybrid hosting solution, which takes advantage of the availability of Azure storage and of the computing power and scalability of our Data Centers to ensure unparalleled performance. Kubify Cloud hosting is a fully managed, turn-key service: you no longer need to worry about the technical management of the infrastructure, and you can easily develop your applications on an all-inclusive platform capable of ensuring the highest service availability and top-level performance. Besides, like all our Cloud services, Kubify includes a series of additional free services in the hosting offer, such as hourly backups, synthetic monitoring, development and testing environments, and 24/7 technical support.


KUBIFY: WEB HOSTING OF TOMORROW USES KUBERNETES

Why is Kubify a unique and innovative web hosting solution on the market?

Any online project needs high-quality web hosting able to support business growth at its best. Today we will talk about Kubify, our advanced web hosting solution, designed specifically to deliver the best performance and availability to enterprise digital projects. In this post we want to show you why Kubify is one of a kind, and how it differs from classic web hosting services.

Kubify vs traditional web hosting services

Kubernetes Containers
Kubify is based on Kubernetes containers. Containers are currently the most advanced Cloud technology available on the market, and Kubernetes is definitely the leading player when it comes to enterprise infrastructures. Kubernetes containers offer the best service availability, high flexibility and modularity, as well as great performance.

Platform as a Service (PaaS)
Ideal for developers, PaaS takes all the worry about the underlying infrastructure away. Kubify offers an all-inclusive platform able to ensure the highest productivity, on which you can develop your applications easily without damaging downtime.

Multi-datacenter & Hybrid solution
Kubify has the particularity of being a multi-datacenter and hybrid solution. This means, first, that service availability is ensured at maximum levels thanks to the implementation across multiple data centers. Moreover, as a hybrid solution it offers additional benefits compared to traditional hosting services, because it combines the advantages of both private and public Cloud.

High-availability and scalability
Kubify uses the efficiency of Azure storage together with the high scalability provided by our Data Centers to ensure an SLA close to 100%.

“Turn-key” service
Kubify is a fully managed web hosting service, delivered ready-to-use to customers.

But that’s not all! We have mentioned only some of Kubify’s interesting and innovative features, but it has many other benefits. Kubify includes a wide range of additional free services like GIT and staging environments, daily backups, 24/7 technical support and many more.

 

DOCKER UPDATES THAT DEVELOPERS WILL LOVE

DockerCon 2019: incoming updates to simplify developers life

The San Francisco DockerCon 2019, a three-day event dedicated to Docker technology, has just ended. Every year the conference brings together IT professionals, developers, sysadmins and Docker users in general, with the goal of promoting training and sharing the latest Docker updates.

In the last five years Docker has become synonymous with software containers, but that does not mean all developers know the technical details of managing and deploying Docker containers. Indeed, during DockerCon the release of new tools was announced to help developers, who might not be Docker experts, work easily with containers.

Technology has evolved and the company has seen the container market broaden, but to take advantage of that it is necessary to make working with containers easier. So said Scott Johnston, Chief Product Officer at Docker. The firm is focusing on providing customers with a set of tools to improve Docker usability and allow all users to manage Docker simply and quickly. According to the CPO, most users are not Docker specialists, but must be able to use it easily even without expertise.

The beta version of Docker Enterprise 3.0 will be launched soon and includes several key elements. In the following lines we will describe the main Docker updates aimed at simplifying developers’ lives.

Docker Desktop Enterprise

First of all, Docker Desktop Enterprise allows the IT department to set up a Docker environment with the kind of security and deployment templates that make sense for each customer. Developers can then choose the right templates for their implementations while conforming to the company’s governance rules.

Johnston explains that the templates come with IT pre-approved configuration settings and container images. IT will provide the templates to developers through these new visual tools. The goal is to streamline processes and allow developers to choose the proper templates without going back for IT approval.

The basic idea is to let developers focus completely on application development by providing pre-built, ready-to-use Docker tools, so they don’t have to worry about technical issues.

Docker Application

Another feature is Docker Application, which allows developers to create and manage complex container applications as a single package and deploy them on the infrastructure they want, Cloud or on-premise. Five years ago, when Docker got started, everything was simpler, often involving just one container. Now, with the increasing popularity of microservices, there is a new level of complexity, especially when large sets of containerized applications have to be deployed. Operations can now programmatically change parameters for containers depending on the environment, without changing the application itself. You can imagine how much easier and faster that makes things for developers.

Docker Kubernetes Service (DKS)

Finally, the container orchestration component. Kubernetes is definitely the most popular tool for managing containers. Docker Enterprise 3.0 integrates the latest version of Kubernetes into its offer, customized specifically to simplify the orchestration of Docker containers. The aim is to provide customers with a powerful tool that is also easier to use, as well as fully compatible with the Docker environment.

For that reason, Docker announced the release of Docker Kubernetes Service (DKS), which has been designed with Docker users in mind and includes support for Docker Compose, a very popular scripting tool in the Docker community. On one side you can use all of Kubernetes’ features; on the other, you have DKS at your disposal as a Docker-friendly version designed for developers.

A common goal

Additional features concern tools for automated container deployment on your infrastructure of choice and security enhancements to the Docker environment.

All these features have something in common besides being part of Docker Enterprise 3.0: they try to reduce the complexity associated with using and managing Docker containers, so that developers can focus on application deployment without worrying about the technical details of the container infrastructure. At the same time, Docker wants to help Operations teams manage Docker containers easily. When the new tools become available on the market, DevOps teams will judge how well Docker has done. The Docker Enterprise 3.0 beta will be available later this quarter.


6 CLOUD TRENDS TO PREPARE FOR 2019

In the last decade the Cloud has played a leading role in the IT scenario, contributing more and more to the technological innovation of businesses and to corporate IT transformation. If at the beginning many were skeptical about moving their processes and applications to the Cloud, the use of Cloud computing services now appears to be standard. According to statistics from the Osservatorio Cloud Transformation of the School of Management of the Milan Politecnico, 82% of medium-large enterprises in Italy use at least one public Cloud service. The data show that companies have made the Cloud an integral part of their IT strategy: 25% consider it the favourite solution for new projects, and for 6% it is even the only choice. But the Cloud still has a lot to give to business. Trends for the new year confirm this widely, highlighting the technological issues and the opportunities for innovation that the latest Cloud solutions bring to enterprises.

1. Multi-Cloud approach: 2018 vs 2019
If 2018 was the year Multi-Cloud became mainstream, 2019 is the year Multi-Cloud strategies grow. The Multi-Cloud approach will soon be a must-have for firms to gain a competitive advantage and stay up to date with the market. But let’s back up: what does Multi-Cloud mean? Rather than using just one Cloud model, for instance public or private Cloud, a Multi-Cloud solution uses different services, and possibly different providers, according to specific business needs. The future is focused on applications, and users demand speed and high performance. A standard solution can no longer meet those challenges, and each company needs its own custom Cloud ecosystem.

2. 80% of processes move to Cloud
As said above, until recently skepticism about migrating information systems to the Cloud was very strong. The reasons were a lack of confidence in the security of data and processes and the traditional resistance to change. Nowadays, data show that in the coming years 80% of business processes will move to the Cloud, and that is not all: studies point to another trend connected with Cloud migration. Indeed, many companies are refactoring their on-premise infrastructures to move them easily to Cloud-native environments.

3. Artificial Intelligence in embedded systems
Artificial Intelligence is starting to be embedded in all next-gen applications, improving the efficiency of systems and processes. Thanks to the large amount of data at our disposal and the growing use of Artificial Intelligence to analyze it, business productivity is estimated to increase considerably. Together with A.I., other emerging technologies such as machine learning, serverless computing and augmented reality are growing fast and helping to improve the flexibility and efficiency of systems.

4. 70% of IT functions will be automated
In 2019 automation becomes a keyword. 70% of IT functions will be automated, as will most customer care activities, which will be managed by chatbots and Artificial Intelligence in a totally automated way. In addition, 50% of data will be managed with automated processes. Technical staff can skip performing many still-necessary routine activities, such as self-patching and tuning, and very soon also activities related to availability and SLAs, leaving more time for business development.

5. Containers: Kubernetes on the front line
Containers already played a great role in the Cloud world in 2018. They are Cloud-native systems whose specific features will make them increasingly popular as enterprise technology. Containers provide high-level performance, flexibility and scalability, greater security and SLAs, and they suit any kind of environment perfectly. These peculiarities make them the ideal system to integrate with Multi-Cloud strategies. Among the different container technologies, Kubernetes is on the front line and represents the most popular model for orchestrating and managing Cloud containers globally.

6. New technical skill needed
These trends create a gap between the skills of providers’ and businesses’ IT teams and those needed to implement the latest strategies and manage Cloud-native architectures. Hence the need to train technical staff in up-to-date skills and to hire new professional figures matching the latest market demands (for instance Cloud Security Specialist or Cloud Architect) to shape the companies of the future.

 

4 INNOVATIVE WAYS TO USE DOCKER

We have already talked about containers and especially about Docker, lately the most used container management tool in the IT field. In today’s post we will show you four ways to use Docker you never thought about.

1. Testing
The scientific validity of an experiment is based on its repeatability. Often, to verify the credibility of a new study, you need to replicate even the experiment’s working environment and conditions. How can you deliver the appropriate tools to check the correctness of scientific publications and papers? With Docker. Thanks to the container technique, Docker is the ideal solution for making experiments replicable, because the same working environment used in the research can be made available to anyone who wants to verify it.
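As an illustration of this reproducibility argument, a research environment can be frozen in a Dockerfile by pinning exact versions, so that reviewers rebuild the very same environment years later (the versions and file names here are made up for the example):

```dockerfile
# Hypothetical reproducible research environment; all versions are illustrative.
FROM python:3.10.13-slim
# Pin exact library versions so every rebuild yields the same environment.
RUN pip install --no-cache-dir numpy==1.24.4 pandas==2.0.3 scipy==1.10.1
COPY analysis.py /work/analysis.py
WORKDIR /work
CMD ["python", "analysis.py"]
```

Anyone who wants to check the study builds the image and runs the analysis under the same conditions as the original authors.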

2. Deploy over the clouds (literally!)
An interesting demonstration surprised the audience at DockerCon 2016: a drone’s software was updated while it was still flying. Hard to believe, but with containers you can, and there is proof. Thanks to Docker it was possible to update the drone’s software during the flight: a container with the new program was started in parallel and, once updated and having received the transferred data, it replaced the original container. With Docker, therefore, you can manage application deployment remotely, and with a transfer waiting time of just 50-200 ms from one container to another.

3. Education
Containers don’t need manual configuration, so a lot of time is saved in preparing the working environment. For students this is often one of the most problematic stages. Using a container management system, you stop wasting time and get a superior-quality education.

4. Local stacks management
Docker is also a valid tool for managing local stacks. Continuous software updates risk creating version conflicts and incompatibilities that are hard to solve. Thanks to containers, this kind of issue is easily avoided: applications and services can be written in any programming language without generating system conflicts. Indeed, containers are isolated, independent environments into which you can put an application with all of its dependencies, bootable on any machine that runs Docker.
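The local-stack scenario can be sketched with a Docker Compose file: each service is isolated with its own runtime version, so two different language or database versions coexist without conflict (service names, images and versions are illustrative):

```yaml
# docker-compose.yml - illustrative local stack; images and versions are assumptions.
services:
  legacy-api:
    image: python:3.8-slim        # older runtime, isolated in its own container
    command: python /srv/app.py
    volumes:
      - ./legacy:/srv
  new-api:
    image: python:3.12-slim       # newer runtime, no conflict with legacy-api
    command: python /srv/app.py
    volumes:
      - ./modern:/srv
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```

The two Python versions never touch the host installation or each other, which is exactly the version-conflict problem described above.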

Do you want to know more about Docker and containers? Request a free consultation with our experts!

 

WHAT IS KUBERNETES AND WHY YOU SHOULD USE IT

Kubernetes and containers

Last time we saw what Docker is and how it works. Today we turn to Kubernetes. The two tools are strictly connected, and both belong to container technology. Let’s see what we are talking about.

Kubernetes is an open source tool for orchestrating and managing containers. Developed by the Google team, it is one of the most widely used instruments for this purpose. Kubernetes removes many of the manual processes involved in deploying and scaling containerized applications, and lets you manage the clusters of hosts on which containers run easily and efficiently.

Imagine using Docker to create an infrastructure composed of several containers; once you reach a certain degree of complexity, Docker struggles to activate and deactivate the containers and to carry out all the other management operations, and that is where Kubernetes comes in. Indeed, this tool was born to simplify container management and orchestrate the entire infrastructure quickly and efficiently. However, the use of Kubernetes is not only about the number of containers, but above all about the level of availability required for the service. Kubernetes is ideal for any business that needs a high-availability solution, ensuring service continuity with an SLA close to 100%.

How does it work?

First of all, let’s introduce some basic terms to understand the Kubernetes architecture.

Master: the machine that controls the Kubernetes nodes. It is the origin point of all processes.

Nodes: the machines that execute the requested activities, controlled by the Kubernetes master.

Pod: a group of one or more containers deployed on a single node. All containers in a pod share some resources. Pods abstract networking and storage away from the underlying container, making it easy to move containers around the cluster.

Kubernetes makes it possible to deploy containers in a scalable way, with the aim of managing workloads at best. It enables you to create applications and services spanning multiple containers, schedule them, and manage their scalability and integrity over the long term. The management complexity deriving from a high number of containers is simplified by grouping containers into “pods”, which help schedule workloads and provide the required services, including storage and networking, to the containers. Kubernetes can also automatically balance loads across pods, greatly facilitating the management of the whole infrastructure. In addition, the standard Kubernetes infrastructure is fully redundant, which drastically reduces the risk of downtime, while with simpler container management tools like Docker, availability is not ensured at such high levels.
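A minimal sketch of these concepts in Kubernetes manifest form: a Deployment asks the master for three replicas of a pod, and a Service load-balances traffic across them (the names and image are illustrative, not a production setup):

```yaml
# Illustrative Deployment: Kubernetes keeps three pod replicas running,
# rescheduling them onto healthy nodes if one fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25      # example image
          ports:
            - containerPort: 80
---
# Illustrative Service: balances incoming traffic across the pods above.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
```

Scaling is then a matter of changing `replicas`; Kubernetes reconciles the cluster toward the declared state.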

Thanks to the development of new technologies and to the progress made with namespaces, the LinuX Containers (LXC) project was born in 2008: the most complete container management solution of those years. Among the crucial components of the project is the development of cgroups, created by Google in 2006, which allow controlling and confining the amount of resources used by a process or a group of processes.

To sum up: why should you use Kubernetes?
1. If you want to manage your containerized applications easily, quickly and efficiently
2. If you need an HA solution and cannot suffer downtime in any case
3. If you have a complex infrastructure composed of several containers
4. If you want to take advantage of the many other benefits of Kubernetes that we’ll show you in the next post 🙂

Do you want to use Kubernetes but don’t know how to start? Request a free consultation with our experts!

 

CONTAINERS VS VIRTUAL MACHINES

What are the differences between containers and virtual machines?

Recently we talked about containers, the new virtualization technology that is really taking off in the IT field, especially in the Cloud computing industry.

Many wonder what the differences are between containers and virtual machines, until now the most widely used virtualization method, or whether they really differ at all and containers are just “simple virtualization”. Today we’ll look at the main differences between containers and virtual machines and clear up any confusion.

Containers can be considered an offshoot of virtualization, a sort of “new generation” that introduces significant innovations over the traditional technique. First of all, the container approach does not involve a hypervisor; instead, a system packages the applications into containers, creates a level of abstraction between the containers and the host operating system, and manages the activation and deactivation of the containers. Another big difference is that virtualization makes it possible to run several operating systems simultaneously on a single architecture, while containers share the same operating system kernel and isolate the application processes from the rest of the infrastructure.

Here are the main differences between containers and virtual machines:

– Simplified deployment: container technology simplifies the deployment of any application, because the application is packaged in a single component that can be distributed and configured with a single command line, without worrying about configuring the runtime environment

– Fast availability: by virtualizing only the operating system and the components needed to run the application, instead of the entire machine, startup times are drastically reduced compared to those of virtual machines

– Wide portability: containers can be created and replicated easily and rapidly in any environment. This is a great benefit from a software lifecycle perspective, because containers can be copied very quickly to create development, test and live environments, and do not require the usual configuration.

– Granular control: containers can package an entire application or just a single component. In this way they allow developers to further divide computing resources into microservices, ensuring tighter control over the running application and an increase in the performance of the whole infrastructure.

– Agility: a great strength of containers is that they are “lighter” than virtual machines, because they don’t need to start their own operating system. Consequently, containers are faster to activate or deactivate, and they are the ideal solution for environments whose processing load varies widely and unforeseeably.

But as with everything there are strengths and weaknesses, and containers too have their vulnerabilities. One is the difficulty of management when there is a high number of containers. By contrast, virtual machines are simpler to use, especially because they rarely reach the large quantities that easily occur with containers. Another weakness is precisely the sharing of the operating system kernel: in theory, one container could compromise the stability of the kernel, badly affecting the others.

In conclusion, these are not opposing solutions. Depending on the purpose, virtualization or container technology may be more suitable. For instance, the agility of containers clashes with the impossibility of having multiple operating systems in the same infrastructure. In the same way, with limited resources and capacity, a “light” solution like containers is more suitable and better performing than an infrastructure composed of virtual machines.

Do you want to use containers but don’t know how to start? Request a free consultation with our experts!

 

BENEFITS OF CONTAINERS

Why use containers?

In the last articles we explained what containers are and went through a short history of the technology. Today we will focus on one of its most interesting aspects: the advantages that companies gain from using containers.

In particular, we have identified six benefits of containers that make an infrastructure more flexible and efficient.

1. Isolation
A great benefit of container technology is the isolation of resources: RAM, processes, devices and networking are virtualized at the operating system level, and applications are isolated from each other.

This means you don’t have to worry about conflicting dependencies or shared resources, because each application has defined limits on its use of resources. In addition, thanks to isolation, the level of protection is higher.
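As a sketch of how such limits can be declared in practice, a Docker Compose service can cap the memory and CPU available to a container; the service name, image and values below are illustrative:

```yaml
# Illustrative Compose fragment: each service gets an explicit resource budget.
services:
  api:
    image: my-api:latest   # hypothetical image
    mem_limit: 256m        # the container cannot exceed 256 MB of RAM
    cpus: 0.5              # at most half a CPU core
```

A runaway process inside `api` is confined to its budget instead of starving the other applications on the host.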

2. Increase of productivity
One of the benefits of containers, maybe the most obvious, is the chance to host a large number of containers even on a PC or laptop, so that a deployment and testing environment is always available for any application, something that would be more complex with virtual machines. In particular, this kind of technology results in an increase in developer productivity, thanks to the removal of dependencies and conflicts among different services. Each container can host an application or a single part of it and, as we saw above, is isolated from the other containers. In this way developers can forget about synchronization and dependencies for each service, and they are free to run updates without worrying about possible problems among the components.

3. Easy deployment and shorter start-up times
Each container includes not only the application but also the whole package needed to run it, simplifying every deployment operation and facilitating distribution across different operating systems with no further configuration effort. Besides, by virtualizing only the operating system, start-up times become much shorter.

4. Consistent environment
Thanks to standardization, containers make resources portable, reducing the issues that arise when applications move through the development, testing and production cycle. Containers can be deployed easily and securely regardless of the environment: there is no need to configure servers manually, and new features can be released more easily.

5. Operating efficiency
With containers you can run more applications on the same instance and specify exactly how many resources each one may use, ensuring optimal utilization. Containers are lighter than VMs and make the system more agile, increasing operating efficiency and speeding up the development and management of applications.
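For example, resource limits can be set per container at launch. This sketch assumes Docker is installed; the image name `my-service` is a placeholder:

```shell
# Cap the container at 256 MB of RAM and half a CPU core, so several
# services can share one host without starving each other.
docker run -d --name api --memory=256m --cpus=0.5 my-service

# Compare actual usage against the limits:
docker stats --no-stream api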

6. Version control
Container technology lets you manage versions of the application code and of its dependencies. You can keep track of container image versions, compare the differences between them and, if necessary, roll back to a previous version.
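In Docker this versioning is done with image tags. A hedged sketch, assuming Docker is installed and `app` is a placeholder image name:

```shell
# Tag each build, then list all versions of the image.
docker build -t app:1.1 .
docker images app

# Deploy the new version...
docker run -d --name web app:1.1

# ...and, if something goes wrong, roll back to the previous tag.
docker rm -f web
docker run -d --name web app:1.0
```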

Containers have multiple benefits: they are an advanced technological solution that improves application management and makes the architecture lighter and more performant. Many present them as an alternative to the more popular virtual machines, but what are the differences between VMs and containers? We’ll find out in the next post!

Do you want to use containers but don’t know where to start? Request a free consultation with our experts!

 

CONTAINERS: FROM THE ORIGINS TO DOCKER


How was the technology of containers born?

In the last post we talked about containers, a technology increasingly used in IT and cloud computing, whose origins are not as recent as you might think. Let’s go through a short history of containers: from the idea the technology is based on to the latest solutions (Docker and similar systems).

The basic concept behind containers dates back to 1979 with the UNIX chroot mechanism, which could give processes an isolated view of the filesystem. For this characteristic, chroot can be considered a sort of forerunner of today’s containers.
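A minimal sketch of the chroot idea (requires root, and on a real system you would also copy the shell’s shared libraries into the jail):

```shell
# chroot changes the apparent root directory of a process,
# so it can only see files below the new root.
mkdir -p /tmp/jail/bin
cp /bin/sh /tmp/jail/bin/
chroot /tmp/jail /bin/sh   # this shell now sees /tmp/jail as /
```

Unlike modern containers, chroot isolates only the filesystem view, not processes, users or networking.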

The closest ancestor of today’s containers arrived only about twenty years later: FreeBSD Jail, introduced in 2000. FreeBSD Jail made it possible to partition the system into several isolated sub-systems called jails. Compared to chroot, these environments could also isolate the filesystem, users and networking, making them more secure. This solution, however, had some implementation weaknesses and was soon replaced by more efficient methods.

The following year, in 2001, a solution similar to FreeBSD Jail was developed: Linux VServer, capable of partitioning computer resources into so-called “security contexts”, each running a VPS (Virtual Private Server). The VServer project, launched by Jacques Gélinas, laid the foundations for the creation of multiple controlled user spaces in Linux and, over the years, together with additional technologies and components, led to the development of today’s Linux containers.

Thanks to the development of new technologies and to the progress made with namespaces, the LinuX Containers (LXC) project was born in 2008: the most complete container-management solution of those years. Among the crucial components of the project are cgroups, created by Google in 2006, which allow controlling and limiting the amount of resources used by a process or by a group of processes.
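A hedged sketch of how cgroups work (requires root and a kernel with cgroup v2 mounted; the group name `demo` is a placeholder):

```shell
# cgroups limit the resources of a group of processes
# through a virtual filesystem.
mkdir /sys/fs/cgroup/demo
echo "256M" > /sys/fs/cgroup/demo/memory.max   # cap the group at 256 MB of RAM
echo $$ > /sys/fs/cgroup/demo/cgroup.procs     # move the current shell into it
# Every process started from this shell now shares the 256 MB limit.
```

Container runtimes automate exactly this kind of bookkeeping, combining cgroups (resource limits) with namespaces (isolation).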

2013 is the turning point: Docker arrives, now the most widely used Linux container system in the IT field. Docker is an open-source project developed by the company dotCloud (later renamed Docker) on the basis of LXC, which through several advances became the Docker technology we know today. Since 2014 Docker no longer uses LXC as its default execution environment, having replaced it with its own libcontainer library, written in the Go programming language. Docker is a complex but very intuitive ecosystem for deploying and managing containers, rich in functionality, including an image system, local and global container registries and a command-line interface.

We focused on Docker because it is the most widely used technology in the world, but it is obviously not the only system for working with containers: there are other solutions for orchestrating and managing them, such as Kubernetes, Google Container Engine and many more.

Do you want to know more about container technology? Request a free consultation with our experts!

 
