
CLOUD COST MANAGEMENT: Have you ever thought about optimizing the costs of your Cloud?

Cost management, in addition to being a great challenge for companies that use Public Cloud services, is also a great opportunity to promote efficient IT consumption.

By “Cloud Cost Management” we mean identifying, managing and monitoring the drivers of the costs incurred on Cloud or multi-cloud platforms, with some very specific purposes in mind. In this article we explain and explore the approach that Criticalcase adopts and adapts for its customers.

To begin with, it should be noted that managing Cloud costs is not just a question of Operations. Many companies delegate the management and optimization of Cloud costs to the IT Operations team, but the “problem” must be addressed at several corporate levels (Finance, Procurement, Program Management, IT Strategy, etc.) and during the different phases of a project’s life cycle.

 

The following image shows a typical segmentation of “Cloud Cost Management”. This model is effective but incomplete, as it operates solely at the IT Operations level:

What are the main aspects that drive Cloud costs out of control?

These are some of the main causes that lead to a drift in Cloud costs:

Every Cloud Service Provider has its own billing method and its own way of applying costs. A bill may include thousands of options and combinations that are difficult to understand, not least because they can vary during the life cycle of the project.

This complexity increases when a customer uses multiple Cloud Providers, since they will have to manage different payment and billing methods.

Invoices are made up of hundreds of line items, making it difficult to reconstruct and allocate costs.

Self-provisioning causes out-of-control growth and unexpected costs. Easy, unconstrained access to the point-and-click web console can lead to an uncontrolled increase in resources.

Cloud providers announce new services and components, new features and new pricing models every year, making these changes difficult to track.

The same application can be built on many different architectures and components, each involving different costs. This makes it harder for companies to calculate and identify the most cost-effective alternative that satisfies the customer.

The main Cloud platforms, such as AWS, Microsoft Azure and Google Cloud Platform (GCP), have different billing, service, API and management systems. This lack of standardization creates difficulties when using multiple platforms.

 

The task of the Criticalcase professionals responsible for IT operations and cloud management is to keep these cost drivers under control. They do so by following a cost optimization roadmap.

Cost Optimization Roadmap

The Criticalcase methodology, which takes its cue from the Gartner framework, provides a structured way to manage Public Cloud costs.

This methodology not only covers operational aspects, such as reducing disk space or turning off unused machines, but also provides guidance on architecture, application development, DevOps and governance.

It is a recursive and structured approach that aims to ensure a balance between costs and the level of service required.

PLANNING

In this phase, Criticalcase defines the objectives, direction and business requirements together with the customer, taking the available budget into consideration.

A census of the applications used in the company is taken to understand their value, impact, complexity and security constraints.

Cost planning is the key to establishing cloud spending expectations. Skipping this component of the roadmap and leaving applications without budgets means companies will struggle to hold the consumers of those applications accountable for their expenses.
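For instance, spending expectations can be encoded as hard budget alerts. Below is a minimal sketch assuming AWS and the boto3 SDK; the account ID, the 5,000 USD limit and the notification address are illustrative placeholders, not values from this article:

```python
# Minimal sketch: register a monthly cost budget with an 80% alert (boto3).
# Account ID, amount, and e-mail address are illustrative placeholders.
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "monthly-cloud-budget",
        "BudgetLimit": {"Amount": "5000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            # Alert when actual spend crosses 80% of the budgeted amount.
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finops@example.com"}
            ],
        }
    ],
)
```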

 

ASSESSMENT AND ANALYSIS

At this point we analyze the architecture to understand how it was deployed and developed. Once the information is gathered, Criticalcase begins to monitor and measure the workload, to identify any oversized or undersized machines and to analyze the costs involved.

The technical requirements are also analyzed, then compared and correlated with alternative solutions.
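To give an idea of what this measurement looks like in practice, here is a minimal sketch assuming AWS, boto3 and CloudWatch metrics; the instance ID and the 40% threshold are invented for illustration:

```python
# Sketch: flag a potentially oversized EC2 instance from its average CPU load.
# Instance ID and the 40% threshold are illustrative assumptions.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")
instance_id = "i-0123456789abcdef0"
now = datetime.datetime.utcnow()

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
    StartTime=now - datetime.timedelta(days=14),
    EndTime=now,
    Period=3600,          # one sample per hour
    Statistics=["Average"],
)

points = [p["Average"] for p in stats["Datapoints"]]
avg_cpu = sum(points) / len(points) if points else 0.0
if avg_cpu < 40:
    print(f"{instance_id}: avg CPU {avg_cpu:.1f}% over 14 days -> downsizing candidate")
```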

 

COST ANALYSIS

In the cost analysis phase, Criticalcase takes a census of the services used by the customer and implements a labeling strategy. These labels, or more simply tags, attach metadata to all the elements of a provider’s native resource hierarchy and appear in the provider’s invoice next to each item, so they can be used to group the various costs.

Cost monitoring is important to gain visibility into cloud spending, which is essential to verify that expectations are being met and to detect any anomalies.
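As an example of how tags then drive cost reporting, the sketch below, assuming AWS with boto3 and a cost-allocation tag named `project`, groups one month of spend by tag value through the Cost Explorer API:

```python
# Sketch: group one month's spend by the "project" cost-allocation tag.
# The tag key and the date range are illustrative; the tag must be activated
# as a cost-allocation tag in the billing console before it appears here.
import boto3

ce = boto3.client("ce")  # Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-01-01", "End": "2021-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "project"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]            # e.g. "project$webshop"
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag_value}: {float(amount):.2f} USD")
```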

 

REDUCTION

At this point, the cost reduction activity starts: rightsizing machines, introducing on/off schedules, disposing of unused resources, and so on.

This step is the quickest way to immediately reduce costs as these practices do not require architectural changes and are easy to apply. Ignoring this component of the framework will increase costs for cloud services and will not allow you to take advantage of the elasticity of cloud computing.
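A typical quick win of this kind is hunting for orphaned resources. The following sketch, assuming AWS and boto3, lists unattached EBS volumes that are still being billed:

```python
# Sketch: list unattached ("available") EBS volumes, a common source of waste.
import boto3

ec2 = boto3.client("ec2")

volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]

for v in volumes:
    print(f"{v['VolumeId']}: {v['Size']} GiB, created {v['CreateTime']:%Y-%m-%d}")

print(f"{len(volumes)} unattached volume(s) found")
```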

 

OPTIMIZATION

Cloud spend optimization goes beyond the cost reduction techniques of the previous step: strategic optimization often requires architectural modifications to applications in order to reduce their resource needs.

Although these optimizations may take longer to implement than the previous techniques, they bring other advantages, such as greater resilience and scalability. Skipping this step of the framework means savings opportunities cannot be fully realized, leaving behind the economic benefits of adopting cloud-native principles.

Total Cost of Ownership, Stakeholder and Governance Model

The chart below shows the Total Cost of Ownership (TCO).

As the image shows, the TCO optimization curve tends to flatten after a while: after an initial period in which cost optimization is very pronounced, savings tend to stabilize.

Each company should evaluate and study its own curve carefully in order to optimize the cost/benefit ratio with respect to its applications, constraints, objectives and strategy.

The basic rules to follow are:

  • Design architectures and solutions based on cost-optimization principles: efficient use of IaaS and PaaS, correct sizing, and service optimization.
  • Follow cost reduction policies continuously, inform customers about optimization opportunities, and establish reports and dashboards to create cost awareness throughout the company.
  • Define the requirements the application must satisfy in terms of performance, availability, frequency of updates and intended use.
  • Provide governance rules for policymaking regarding budget approval and cost allocation.

What analysis tools should be used to optimize costs?

Our suggestion is to use the Cloud platforms’ native tools. These tools are tightly integrated with the platform and offer broad functionality.

Native tools are available to all customers; some are free, while others are billed on a consumption basis. Most importantly, cloud service providers keep investing in their management toolsets, frequently adding new features and services.

 

What are you waiting for? Optimize the costs of your cloud: all you have to do is contact us 😉


MULTI-CLOUD: HOW TO MANAGE CLOUD INFRASTRUCTURES

The cloud has reshaped the way we do business. Thanks to this technology, companies have had the opportunity to upgrade system management as well as their overall services. Unfortunately, many companies still don’t use this technology to reach their full potential.

Decision-makers in many organizations are overwhelmed by the technical details shared online, which most of the time remain too hard for the non-tech-savvy to understand. That’s why we decided to write a practical guide to multi-cloud, in which we focus on the services offered by cloud technology.

An increasing number of companies have decided to replace the limited possibilities of a single cloud system with multi-cloud, as it’s faster and more effective.

As a result, multi-cloud adoption registered 75% growth over the previous year. However, experts recommend a small-step approach to multi-cloud, which allows companies to learn gradually about the various functions of this technology.

 

What is multi-cloud?

A multi-cloud system operates across multiple public clouds, sometimes offered by multiple third-party providers. The main advantage of this environment is its flexibility, as it can adapt to carry out different tasks in total autonomy.

It’s an ambitious goal, as the system aims to connect different types of software or apps (for example, using advanced or RESTful APIs). At the same time, it should reduce or eliminate so-called vendor lock-in, the relationship of dependency established between the provider (which tends to tie customers to specific services) and the beneficiaries of the service.

 

The ideal multi-cloud service provider

Unfortunately, there’s no one-size-fits-all provider of multi-cloud services. But various general criteria can guide you to the right choice. In the era of big data and the Internet of Things (IoT), companies are pressed by the need to improve performance at medium and large scale. This involves the continuous design and redefinition of the architectures that guide various systems. In this light, the use of multi-cloud becomes necessary to streamline operations and make them smart.

Multi-cloud service providers must be able to offer a high-performing, adequate network infrastructure based on the fault-tolerance paradigm, so that it can support disaster recovery and fast restoration from backup, as well as ensure a low probability of breakdowns or inefficiencies during use.

Before entrusting yourself to the first provider you find, you should check that it meets these requirements, and that it has qualified technicians available to solve any potential problems. Otherwise, you risk finding yourself handling an ineffective cloud that is unresponsive and difficult to manage. You can learn more about the advantages of an effective multi-cloud service by checking out our multi-cloud offer.

 

The benefits of multi-cloud systems

Not only does multi-cloud give you the possibility of customizing services, it also tends to improve workload distribution across multiple nodes of the network, minimizing the risk of congested nodes. As you distribute the work differently, you speed up packet delivery on the network and improve routing management. These features open up scenarios that were impossible to imagine a few years ago.

Multi-cloud has become fundamental in a period of fast, hard-to-predict technological development. As a consequence, companies need to develop the ability to adapt quickly to new technologies, too, to stay competitive and meet the needs of potential customers.

 
 



WEBINAR: MULTICLOUD IAC ON TENCENT CHINA, ISSUES AND BEST PRACTICE

Criticalcase and the Polytechnic University of Turin present the webinar “Multicloud IaC on Tencent China: Issues and Best Practice”.

22nd of February at 17:00 CET

Recent studies show China as the new digital frontier: an ever-growing market for the sale of products and services by the Western world, but also a place where new businesses, technologies and consumption models are emerging, bound to become popular at an international level.

In addition, China is home to three of the Internet giants: Alibaba, Baidu and Tencent.

China excels in e-commerce, accounting for 35% of the global market. Chinese users prefer making purchases on mobile devices and look for services and experiences more than for simple products.

Currently considered the largest market for Western countries, China will be a space of continuous, significant economic growth and technological innovation, and it will play an increasingly strategic role in the digital world and in the development of new business models.

Based on a use case for a big player in the fashion industry, the webinar will focus on:

 

1) Cloud environment with global diffusion (China-related issues). DEMO (Tencent Cloud)

2) How to structure and launch an IaC Terraform project. DEMO

3) How to exploit the multicloud (Lambda and S3). DEMO

4) Secure the access: Bastion host. DEMO

5) How this kind of project could evolve by integrating into a DevOps pipeline.

 

The webinar is open to anyone interested in the topic; we kindly invite you to register and book your place!




SASE ARCHITECTURE: 6 USE CASES

What is SASE architecture, and what are its use cases?

Companies are digitizing, which means the time has come to think about managing optimized access to data and applications, both on-premises and in the cloud, for an increasingly mobile global workforce.

Criticalcase has chosen to partner with Cato Networks because it is the first implementation of Gartner’s Secure Access Service Edge (SASE) framework, which identifies a global, cloud-native architecture as the way to provide secure, optimized access to all users and applications.

The Cato solution enables companies to move from traditional networks such as MPLS to global, secure, agile and affordable modern networks.

Cato Cloud connects all corporate network resources, such as branch offices, the mobile workforce, on-premises datacenters and cloud services, providing a global, secure and controlled SD-WAN service. With all WAN and Internet traffic consolidated in the cloud, Cato offers a suite of security services to protect it.

01

Migration from MPLS networks to SD-WAN

  • MPLS networks are expensive, inflexible and limited in capacity. Using Cato Edge SD-WAN, businesses increase usable capacity and improve resiliency at a lower cost per megabit. Companies with a global footprint leverage Cato’s private global network backbone to replace both the global MPLS network and the unpredictable Internet. Migrating to SD-WAN lets you optimize performance and maximize the throughput of on-premises and cloud resources.

02

Optimized global connectivity

  • We offer a private global backbone with an integrated WAN network to ensure a predictable, SLA-guaranteed, high-performance network experience everywhere. Using Cato we can offer an excellent user experience for accessing on-premise and cloud applications.

03

Secure Internet access at branches

  • We provide a complete network security stack built into Cato Cloud. By connecting all offices to the private global network backbone through the Cato Edge SD-WAN platform, all traffic, both Internet and WAN, is fully protected by Cato Security as a Service, eliminating the cost and complexity of point security solutions, whether appliances or cloud services.

04

Cloud acceleration and control

  • We accelerate access to the cloud by routing all cloud traffic to the Cato PoP closest to the cloud destination. Since Cato’s PoPs share the footprint of the data centers of major cloud providers, the latency between Cato and these providers is essentially zero. Optimizing cloud access only requires a single application-level rule that determines where cloud application traffic should leave the Cato Cloud. Enough of the hassle and cost of deploying cloud appliances or creating regional communication hubs in an effort to extend the SD-WAN to the cloud.

05

Mobile network security and optimization

  • Cato’s global network and security capabilities extend to a single mobile user’s laptop, smartphone or tablet. Using a Cato Client or clientless browser access, users dynamically connect to the nearest Cato PoP and their traffic is optimally routed over Cato’s private global network backbone to on-premise or cloud applications. Cato’s Security as a Service protects mobile users from threats anywhere in the world and enforces access control to applications.

06

Work from home

  • Cato supports work from home for all employees, always. Companies quickly connect their on-premises and cloud data centers to the Cato Cloud and enable self-service provisioning of Cato Clients for all users who need access to work from home or remotely. Unlike traditional VPN and SDP products, which cannot scale to support the entire enterprise, Cato’s global, cloud-scale platform is designed to optimize traffic to all applications over a private global backbone, continuously inspect traffic for threats, and enforce access control with Cato’s security stack.

CLOUDCONF 2020 – EUROPE’S LARGEST CLOUD CONFERENCE

Cloudconf 2020 - Thursday 5 November 2020 - Live Streaming

CloudConf, one of the most anticipated Cloud Computing events in Europe, returns once again for an online edition that will be held on Thursday 5 November.

This year CloudConf expects thousands of participants and over 30 talks and keynotes on topics such as scalability, IoT, Docker, Kubernetes, Machine Learning, Blockchain, Microservices, Serverless, Performance, Cloud Development and much more.

CloudConf will be a true streaming conference: participants will attend quality talks and technical keynotes with high-profile speakers, group chats and mentorship, surveys, and many interesting prizes. Several rooms will host discussion groups on technology topics, allowing participants to interact, discuss and exchange views, as well as post questions to speakers and get expert opinions.

The event’s sponsors are the leading international brands in the world of cloud computing, and of course Criticalcase will be there.

As every year, Criticalcase is participating as an event sponsor, and we welcome everybody to visit our virtual stand and get to know us better; our staff will be available for any information.

Our speaker at CloudConf will be Tito Petronio, Digital Solutions Director at Criticalcase, who will talk about deploying a global project in the cloud. Tito will guide you in detail through all the issues related to performance, security, regulations and best practices. And of course, as always, we’ll share helpful tips & tricks for a successful deployment.

Register for the CloudConf event directly online at https://2020.cloudconf.it/ to participate in the live conference on November 5 and follow the Criticalcase talk.




MPLS, SD-WAN AND SASE: WHAT WILL BE YOUR NEXT WAN?

MPLS, SD-WAN and SASE: the future of the WAN

WAN is the backbone of the business. It ties together remote locations, headquarters, and data centers into an integrated network. The role of the WAN has evolved significantly in the past years: beyond physical locations, we now need to provide optimized and secure access to cloud-based resources for a global and mobile workforce.

The existing WAN optimization and security solutions were designed for physical locations and point-to-point architectures, and are no longer able to support this transformation. 

 

First Generation: Legacy WAN Connectivity

Currently, there are two WAN connectivity options, which balance cost, availability and latency: MPLS and Internet. 

MPLS

With MPLS, a telecommunication provider provisions two or more business locations with a managed connection and routes traffic between these locations over their private backbone. In theory, since the traffic does not traverse the Internet, encryption is optional.  

Because the connection is managed end to end by the telco, it can commit to availability and latency SLAs. This commitment is expensive and is priced by bandwidth. Enterprises choose MPLS when they need to support applications with stringent uptime requirements and a minimum quality of service (such as Voice over IP, VoIP).

To maximize the usage of MPLS links, WAN optimization equipment is deployed at each end of the line to prioritize and reduce different types of application traffic. The effectiveness of such optimizations is protocol- and application-specific (for example, compressed streams benefit less from WAN optimization).

Advantages of MPLS: low latency and high availability

Disadvantages: high price 

Internet

Internet connections procured from an ISP typically offer nearly unlimited last-mile capacity for a low monthly price. An unmanaged Internet connection doesn’t have the high availability and low latency of MPLS, but it is inexpensive and quick to deploy.

IT establishes an encrypted VPN tunnel between the branch office firewall and the headquarters/data center firewall. The connection itself goes through the Internet, with no guarantee of service levels, because it is not possible to control the number of carriers or the number of hops a packet has to cross. This can cause unpredictable application behavior due to increased latency and packet loss.
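To make this unpredictability concrete, even a trivial probe shows it. The sketch below, in plain Python against an illustrative endpoint, samples TCP connect latency to estimate the mean delay, jitter and loss of an unmanaged path:

```python
# Sketch: sample TCP connect latency and loss toward a remote endpoint
# to observe the jitter of an unmanaged Internet path. Host is illustrative.
import socket
import statistics
import time

def probe(host: str, port: int = 443, samples: int = 20):
    latencies_ms = []
    for _ in range(samples):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=2):
                latencies_ms.append((time.monotonic() - start) * 1000)
        except OSError:
            pass  # timeout or refusal counts as a lost sample
        time.sleep(0.5)
    if not latencies_ms:
        return float("nan"), float("nan"), 100.0
    loss = 100 * (1 - len(latencies_ms) / samples)
    return statistics.mean(latencies_ms), statistics.pstdev(latencies_ms), loss

mean, jitter, loss = probe("example.com")
print(f"mean {mean:.1f} ms, jitter {jitter:.1f} ms, loss {loss:.0f}%")
```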

Advantages of Internet: low price

Disadvantages: unpredictable latency and low availability

Second generation: Appliance-based SD-WAN

The cost/performance trade-off between Internet and MPLS gave rise to SD-WAN.

SD-WAN uses both MPLS and Internet links to handle WAN traffic: latency-sensitive apps use the MPLS links, while the rest of the traffic uses the Internet link. The challenge customers face is dynamically assigning application traffic to the appropriate link.

SD-WAN solutions offer the management capabilities to direct the relevant traffic according to its required class of service, offloading MPLS links and delaying the need to upgrade capacity.  
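Conceptually, the steering logic reduces to a policy map from application class to link plus a failover rule, as in this simplified sketch; the classes and link names are invented for illustration and do not come from any specific SD-WAN product:

```python
# Simplified sketch of SD-WAN traffic steering: map an application's
# class of service to a WAN link. Classes and links are illustrative.
POLICY = {
    "voice":      "mpls",      # latency-sensitive -> managed MPLS link
    "video-conf": "mpls",
    "saas":       "internet",  # tolerant / bulk traffic -> cheap Internet link
    "backup":     "internet",
}

def select_link(app_class: str, link_up: dict) -> str:
    preferred = POLICY.get(app_class, "internet")
    # Fail over to the other link if the preferred one is down.
    if not link_up.get(preferred, False):
        return "internet" if preferred == "mpls" else "mpls"
    return preferred

print(select_link("voice", {"mpls": True, "internet": True}))   # -> mpls
print(select_link("voice", {"mpls": False, "internet": True}))  # -> internet
```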

SD-WAN solutions, however, are limited in a few key aspects: 

  • Footprint: similar to WAN optimization equipment, SD-WAN solutions must have a box deployed at each side of the link.
  • Connectivity: SD-WAN can’t replace the MPLS link, because its Internet “leg” is exposed to the unpredictable nature of an unmanaged Internet connection (namely, unpredictable latency, packet drops and availability).
  • Deployment: SD-WAN, like the other WAN connectivity options, is agnostic to the increased role of the Internet, cloud and mobility within the enterprise network. It focuses, for the most part, on optimizing the legacy physical WAN.
 

 

Third Generation: Secure Access Service Edge (SASE)

With the rapid migration to cloud applications (Office 365, Salesforce), cloud infrastructure (AWS, Azure, Criticalcase cloud) and a mobile workforce, the classic WAN architecture is severely challenged.

SASE (Secure Access Service Edge) is the convergence of wide area networking, or WAN, and network security services like CASB, FWaaS and Zero Trust into a single, cloud-delivered service model.

According to Gartner, “SASE capabilities are delivered as a service based upon the identity of the entity, real-time context, enterprise security/compliance policies and continuous assessment of risk/trust throughout the sessions. Identities of entities can be associated with people, groups of people (branch offices), devices, applications, services, IoT systems or edge computing locations.” 

It is no longer sufficient to think in terms of physical locations being the heart of the business, and here is why: 

  • Limited end-to-end link control for the cloud 

With public cloud applications, organizations can’t rely on optimizations that require a box at both ends of each link. In addition, cloud infrastructure (servers and storage) introduces a new production environment with its own connectivity and security requirements. Existing WAN and security solutions don’t naturally extend to cloud-based environments.

  • Limited service and control to mobile users 

Securely accessing corporate resources requires mobile users to connect to a branch or HQ firewall VPN, which could be very far from their location. This causes user experience issues and encourages compliance violations (for example, direct access to cloud services that bypasses corporate security policy). Ultimately, the mobile workforce is not effectively covered by the WAN.

SASE aims to address the challenges of the traditional WAN. It is based on the following principles:

– The perimeter moves to the cloud: the notorious dissolving perimeter is re-established in the cloud, which delivers a managed WAN backbone with reduced latency and optimal routing. This ensures the required quality of service for both internal and cloud-based applications.

– The network is “democratic” and all-inclusive: all network elements plug into the cloud WAN over secure tunnels, including physical locations, cloud resources and mobile users. This ensures all business elements are an integral part of the network instead of being bolted on top of a legacy architecture.

– Security is integrated into the network: beyond securing the backbone itself, it is possible to directly secure all traffic (WAN and Internet) that crosses the perimeter, without deploying distributed firewalls.

Download the paper to learn network transformation strategies and how to migrate from MPLS to modern SASE solutions. 

Download free E-book

How to migrate from MPLS to SD-WAN

By adopting SASE, companies gain numerous benefits in terms of agility, collaboration, efficiency and cost reduction.

Criticalcase has formed a strategic partnership with Cato Networks, the world’s first and only SASE platform. Our engineers are always available to answer any of your questions or fill in any information you might have missed. Get in touch to learn more.




AN ONGOING “MEOW” ATTACK DELETES THOUSANDS OF DATABASES

Thousands of unsecured, internet-facing databases have been damaged or destroyed by a wave of attacks called “Meow”. The attack leaves no explanation and no note on what happened or why, except for a single word: Meow.

Meow attacks started at the end of July 2020 and are still ongoing. So far nearly 4,000 databases have been completely deleted; the majority are MongoDB and Elasticsearch, but Cassandra, CouchDB, Redis, Hadoop, Jenkins, and Apache ZooKeeper installations have also suffered Meow attacks.

Meow is an automated attack: a bot script probes a site for known weaknesses such as unsecured ports and vulnerable files. Automated Meow attacks target unsecured installations, for example those without SSL-encrypted communication, or installations that are not protected by a firewall/WAF and are exposed to the public.
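As an illustration of how trivially such exposure can be verified, the sketch below, using Python and the pymongo driver with a placeholder host, checks whether a MongoDB instance accepts unauthenticated connections, which is exactly the condition Meow bots probe for:

```python
# Sketch: check whether a MongoDB instance accepts unauthenticated access,
# the kind of misconfiguration Meow bots probe for. Host is a placeholder.
from pymongo import MongoClient
from pymongo.errors import PyMongoError

def is_exposed(host: str, port: int = 27017) -> bool:
    try:
        client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
        # Listing databases needs no credentials on an unsecured server;
        # a secured one raises an authorization error instead.
        client.list_database_names()
        return True
    except PyMongoError:
        return False

if is_exposed("db.example.com"):
    print("WARNING: database is reachable without authentication")
```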

The source and motivation of the Meow hackers are not quite clear, since the attacks carry no “global” menace and contain no ransom demand. The malicious actors are most probably just doing it for fun, as hacking becomes more accessible and easier year after year.

Bob Diachenko, researcher and cybersecurity expert, was the first to notice the strange wave of attacks exploiting these vulnerabilities. On his Twitter account, the researcher speculated that the hackers most probably not only want to have fun but also want to teach a lesson: to make DB admins more sensitive to IT security topics and push them to pay more attention to securing their data.

However, even if the intentions may seem noble, the hackers have caused serious damage to companies. A huge Indian travel and online booking company lost the personal data of over 700,000 users, while the famous cosmetics brand Yves Rocher lost millions of customer records.

How can you protect your data from Meow-like breaches?

1) Protect against script attacks – websites rely heavily on scripts to run services and access data, and hackers always find ways to exploit those scripts to steal sensitive customer information. Malicious code can come from many sources; a solution that can detect script behavior provides the most effective protection against these types of attacks.

Criticalcase, in collaboration with Akamai Technologies, implements Page Integrity Manager, which takes a detection-first approach so that you can quickly mitigate compromised scripts and update policy controls to stop zero-day and recurring attacks.

2) Use multi-factor authentication (MFA) – today, relying on just a username and password is no longer enough. The best solution is one that can turn on MFA for any application with a single click, with no development, testing, or maintenance required.

3) Assume all the data in your database is sensitive and treat it accordingly. You need to know exactly where the data is and manage its security effectively and easily, keeping control over the whole life cycle of the data.

4) Make sure key people in the company know who is responsible for database security.

5) Secure your data and apps with a WAF (Web Application Firewall): it inspects traffic before it reaches your application and protects your server by filtering out threats that could damage your site or compromise data.

A WAF is an advanced solution that protects you not only from data breaches but also from SQL injection, malicious file execution, cross-site scripting, and more. A cloud-based WAF can scale to protect against the largest DoS and DDoS attacks. Criticalcase, together with Akamai Technologies, implements security solutions to eliminate the risks of downtime, data theft, and security breaches for its clients.

6) Work with a trusted technology partner that can provide you with a tailor-made and fully managed security solution.  

 

Here you can read more about the advantages of cyber security solutions.

 



CALCULATING THE TCO: CLOUD VS ON PREMISE INFRASTRUCTURE

TCO (Total Cost of Ownership) is a financial estimate introduced by Gartner in 1987 to help companies calculate precisely the economic impact of IT projects across their whole life cycle.

Knowing the total costs allows companies to evaluate solutions and products far more consciously than by purchase price alone.

Migration to the cloud is an unstoppable trend: companies weigh the cost-saving advantages that a pay-as-you-go system offers when they need to replace on-premises equipment, lower costs, or switch to an agile business model by developing cloud-native applications.

Despite the growing trend, we still see the decision-making process slow down due to complex cost calculations.

THE TCO IS WORTH MORE THAN THE PURCHASE PRICE

Facing a new IT project, we ask ourselves the eternal question: should we buy physical equipment or go the Cloud route?

To understand the long-term economic impact of owning an infrastructure versus using cloud services, it is necessary to calculate the TCO correctly.

 
 

Calculating the TCO of an on-premises infrastructure: what should you consider?

Purchase of hardware and software: 

  • Costs for the purchase of servers 
  • Storage costs (SAN) 
  • Security devices (Firewall, Crypto Gateway, etc) 
  • Network  
  • Design and implementation of a backup plan
  • IP 
  • Software licenses (Office package, Management, OS, DB, Antivirus and so on) 
  • Colocation in one or more datacenters 
  • If a disaster recovery plan is required, two server farms at least 50 km apart are needed

Associated costs: 

  • Cost and time related to infrastructure design
  • Costs of updates and improvements during the use
  • Energy consumption and constant temperature monitoring
  • Maintenance 
  • Technical support 
  • Staff training 
  • End of life disposal 
  • Inefficiency losses related to the over-dimensioning of the physical environment * 

*During the design phase, resources are estimated roughly, so proprietary hardware typically runs at approximately 60% of its capacity once in production.

A quick look at the costs shows that some are one-off, such as the purchase of physical equipment, while others are recurring (maintenance staff, colocation fees, energy consumption) and occur throughout the whole period of use of the environment.

We recommend calculating the TCO over a 3-5 year period, because the lifespan of physical equipment typically ends after about 5 years of use.

After 3-5 years it will be necessary to evaluate the state of the hardware and budget the entire purchase cost again to design a modern, high-performing infrastructure.

 

Cloud Infrastructure Costs

Cloud costs:

  • No capital costs (CAPEX): they are replaced by a pay-as-you-go approach
  • No maintenance costs, technical management is done by the Cloud service provider
  • No risks associated with maintaining the infrastructure

Cloud convenience:  

  • Fast time-to-market: the provider will prepare the necessary environment in a very short time
  • Ability to solve multiple tasks in a single platform
  • Immediate scalability of resources as needed
  • We only use Enterprise level hardware
  • 24/7 assistance in Italian, English, German

Cloud performance and security:

  • All data reside in our Tier III and IV datacenters
  • SLA guaranteed at 99.9998% 
  • Business continuity at zero cost 
  • Guaranteed security, protection against attacks
  • High services performance, low latencies 

To summarize, remember that the TCO consists of design and implementation costs (CAPEX) as well as maintenance, electricity and other day-to-day costs. Migrating to the Cloud brings a significant reduction of total costs (TCO), around 20% calculated over a 3-year period, and this economic advantage grows every year.
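As a worked illustration of that calculus, with all figures invented for the example rather than taken from a real quote, a 3-year comparison only needs the CAPEX plus recurring OPEX on one side and the cumulative pay-as-you-go fees on the other:

```python
# Worked TCO sketch over 3 years. All figures are invented for illustration.
YEARS = 3

# On-premises: one-off CAPEX plus recurring yearly OPEX.
capex = 120_000            # servers, storage, network, licenses
opex_per_year = 30_000     # colocation, energy, maintenance, staff
tco_on_prem = capex + opex_per_year * YEARS

# Cloud: no CAPEX, a monthly pay-as-you-go fee instead.
monthly_fee = 4_700
tco_cloud = monthly_fee * 12 * YEARS

saving = 100 * (tco_on_prem - tco_cloud) / tco_on_prem
print(f"on-prem: {tco_on_prem:,} EUR, cloud: {tco_cloud:,} EUR, saving {saving:.0f}%")
# -> on-prem: 210,000 EUR, cloud: 169,200 EUR, saving 19%
```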

Criticalcase has operated since 1999 as a High Availability Cloud provider, specializing in the provisioning of tailor-made, fully managed solutions. Additionally, Criticalcase runs its own Autonomous System (AS48815), which guarantees total flexibility in the management and assignment of IP classes. Here you can read the detailed description of our Cloud architecture.




WHAT IS RAID TECHNOLOGY – STORAGE MIRRORING AND HOW DOES IT WORK

RAID is a data storage virtualization technology that combines multiple physical disk drives into one or more logical units for the purposes of data redundancy, performance improvement, or both. RAID stands for Redundant Array of Inexpensive Disks; its mirrored configuration (RAID 1) is also known as disk mirroring.

 

RAID/Mirrored Architecture in Criticalcase

We’ve chosen a mirrored architecture for our storage because it keeps a symmetrical, exact copy of the data in two separate disk pools. The biggest advantage of this architecture is that it provides great resilience and double the performance at the same time.

 

Mirrored Architecture: Resilience

All the disks in each pool (a pool consists of the disks themselves plus their controllers) are organized in RAID systems, which makes it very difficult to compromise a pool. Even if an entire pool breaks down, the service remains up and running with no data loss, because the data is perfectly and entirely copied in the second pool.

iSCSI channels are managed by independent, redundant switches in our datacenters, which assure service continuity even if one of them fails. Criticalcase provides automatic snapshots of all the LUNs in the DataCore storage, with 7-day retention.*

*Snapshots do not replace a backup plan; their integrity depends on the integrity of the LUN they originate from.

 

Mirrored Architecture: Performance

Both pools work in active mode: the two pools sharing the same copy of the data both provide service at the same moment, so for the same amount of storage the IO supply is doubled. Additionally, the controllers are equipped with a large quantity of RAM (at least 256 GB) used as read and write cache, allowing the highest possible performance on a cache hit. iSCSI and mirror channels guarantee at least 100 Gbit of bandwidth for each pool.
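The principle can be sketched in a few lines: every write goes synchronously to both pools, while reads alternate between them, which is where the doubled read throughput comes from. This is a conceptual sketch, not DataCore’s actual implementation:

```python
# Conceptual sketch of a mirrored (RAID-1 style) storage pair: writes go to
# both pools, reads alternate between them. Not DataCore's actual code.
class MirroredStorage:
    def __init__(self):
        self.pools = [{}, {}]   # two independent disk pools
        self._next = 0          # round-robin pointer for reads

    def write(self, block_id: int, data: bytes) -> None:
        # A write is acknowledged only once both copies are in place.
        for pool in self.pools:
            pool[block_id] = data

    def read(self, block_id: int) -> bytes:
        # Alternate pools: with both active, read throughput doubles.
        pool = self.pools[self._next]
        self._next = (self._next + 1) % len(self.pools)
        return pool[block_id]

    def fail_pool(self, index: int) -> None:
        # Losing one pool loses no data; the mirror holds every block.
        del self.pools[index]
        self._next = 0

storage = MirroredStorage()
storage.write(1, b"payload")
storage.fail_pool(0)
assert storage.read(1) == b"payload"   # data survives the failure
```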

 

Mirrored Architecture: Scheme

Tiering

Tiering lets us store data on different classes of disks and is an intelligent way to lower costs without giving up performance. DataCore provides an automatic tiering system that dynamically moves data blocks based on access frequency: frequently requested data is automatically stored on SSDs, while rarely requested data is stored on less performant disks. The algorithm is adaptive.
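In simplified form, the decision rule looks like the sketch below; the access-count threshold is an invented parameter, whereas the real algorithm adapts it dynamically:

```python
# Simplified sketch of automatic tiering: promote hot blocks to SSD,
# demote cold ones to capacity disks. The threshold is illustrative;
# the real algorithm adapts it dynamically.
from collections import Counter

HOT_THRESHOLD = 100   # accesses per observation window (invented value)

access_counts = Counter()      # block_id -> accesses in current window
placement = {}                 # block_id -> "ssd" or "capacity"

def record_access(block_id: int) -> None:
    access_counts[block_id] += 1

def rebalance() -> None:
    """At the end of each window, move blocks between tiers."""
    for block_id, count in access_counts.items():
        placement[block_id] = "ssd" if count >= HOT_THRESHOLD else "capacity"
    access_counts.clear()      # start a fresh observation window

for _ in range(150):
    record_access(7)           # block 7 is hot
record_access(8)               # block 8 is cold
rebalance()
print(placement)               # {7: 'ssd', 8: 'capacity'}
```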

 

Storage types in Criticalcase Datacenters

Criticalcase infrastructure is based on 7 European datacenters, Tier III and IV, distributed across Italy and the rest of Europe, all completely redundant. In each datacenter every rack has a dual power supply and a backup system of UPS units and diesel generators, ensuring 99.99% service continuity.

Criticalcase offers 4 types of Enterprise Storage, all based on Mirrored (RAID) architecture.

The storage tiers are differentiated by performance:

  • Iron: capacity storage based on SAS disks – advised when speed is not essential
  • Silver: balanced storage without SSD technology. We use 10K RPM capacity SAS disks with automatic tiering
  • Gold: balanced storage with SSD technology. We use datacenter-grade SSDs and capacity disks with automatic tiering (50% SSD, 50% capacity disks)
  • Platinum: full-SSD storage with datacenter-grade SSDs



DRIVING DIGITAL TRANSFORMATION WITH CLOUD NATIVE SD-WAN

Companies all around the world are becoming progressively digital. The rapid Digital Transformation is driven by increased home-working requests, BYOD (Bring Your Own Device) initiatives, and IoT and mobility projects. Enterprises have a never-before-seen need to manage company data securely and quickly, delivering it all around the globe.

From a strictly IT viewpoint, digital transformation depends on optimized access to the company’s applications and data by users who are distributed all around the globe and who are today, more than ever, first of all mobile users.

When we talk about Digital Business, we talk about the Cloud-first focus companies adopt to gain agility and speed. This focus is rarely compatible with inflexible, rigid approaches like MPLS and legacy WAN solutions, which almost every company is abandoning because of their lack of readiness and very high costs.

The consumption of resources and applications in branch offices is continuously growing, which is one of the reasons why centralized traffic management from the headquarters datacenter results in slow services and bad performance.

Why should you abandon traditional MPLS?

  • The MPLS network is expensive, with a high cost per bit; it is rigid and hard to modify; and most importantly, it was not conceived for modern Cloud environments
  • The “trombone” effect: fast and secure Internet access is essential for any business, but MPLS backhauls all traffic through a single central access point, with a big performance impact
  • MPLS is limited to physical locations, so mobile users and cloud resources get no priority and little consideration
  • Long provisioning times (weeks or even months)

Replacing MPLS and legacy WAN architectures is an opportunity to connect all remote offices faster and more cheaply, and to assure much better application performance.

Implementing modern networking solutions is the only way to assure a rapid, frictionless digital transformation and to connect any user and any resource from any part of the world.

 
 

Cloud Native SD-WAN

What is the best solution for your network transformation needs?

Cloud Native SD-WAN is a software-based approach to the enterprise network. Adopting SD-WAN significantly lowers the costs tied to MPLS transport, 4G/5G and LTE, and most importantly it improves application performance and increases agility.

Once SD-WAN has been implemented, enterprise architects and network experts can use bandwidth efficiently and assure high performance for all applications, including mission-critical ones, without worrying about IT security.

Download free E-book How to migrate from MPLS to SD-WAN

SD-WAN solutions are in high demand because of the numerous advantages they bring, especially when Cloud-based.

Criticalcase has chosen Cloud Native SD-WAN solutions by Cato Networks to help enterprises with network transformation projects. Criticalcase has the strong engineering expertise to assure maximum performance and to lower the costs and management complexity of an enterprise network.

