
Virtualization vs. Cloud Computing

There is a tendency to confuse the two. Many assume that having a virtualized infrastructure means they are using cloud computing, but this is not true: the value each one offers is different.

Virtualization is a technique for logically creating multiple hardware platforms out of a single physical hardware system. In other words, using virtualization techniques, one can carve a single physical device into multiple logical hardware devices, each with its own allocated processing, memory, storage, and network resources.
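The partitioning idea above can be sketched in a few lines of code. This is a toy model, not any real hypervisor API: the class, resource figures, and VM names are all illustrative assumptions.

```python
# Toy sketch of virtualization's core idea: one physical host is carved
# into several logical machines, each with its own resource allocation.
# Names and numbers are illustrative only, not a real hypervisor API.
from dataclasses import dataclass, field

@dataclass
class PhysicalHost:
    cpus: int
    memory_gb: int
    disk_gb: int
    vms: list = field(default_factory=list)

    def create_vm(self, name, cpus, memory_gb, disk_gb):
        """Allocate a logical machine, refusing if the host is exhausted."""
        used_cpus = sum(vm["cpus"] for vm in self.vms)
        used_mem = sum(vm["memory_gb"] for vm in self.vms)
        used_disk = sum(vm["disk_gb"] for vm in self.vms)
        if (used_cpus + cpus > self.cpus
                or used_mem + memory_gb > self.memory_gb
                or used_disk + disk_gb > self.disk_gb):
            raise RuntimeError(f"not enough free resources for {name}")
        vm = {"name": name, "cpus": cpus, "memory_gb": memory_gb,
              "disk_gb": disk_gb}
        self.vms.append(vm)
        return vm

# One physical box hosting two logical machines:
host = PhysicalHost(cpus=16, memory_gb=64, disk_gb=1000)
host.create_vm("web-1", cpus=4, memory_gb=16, disk_gb=200)
host.create_vm("db-1", cpus=8, memory_gb=32, disk_gb=500)
print(len(host.vms))  # two VMs sharing one physical device
```

The point of the sketch is only that the sum of the logical allocations is bounded by the physical capacity; a real hypervisor additionally schedules, isolates, and mediates access to the actual hardware.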

Technically speaking, virtualization uses a piece of software called a hypervisor. The hypervisor runs either directly on the hardware, in the case of bare-metal hypervisors (example: VMware), or on top of a host operating system (example: Microsoft Hyper-V), and it mediates between the multiple virtualized hardware systems created on the physical device and the drivers for the physical components.

Cloud computing, by contrast, is more of a business model: computing provided as a service, with "pay as you use" as its primary benefit. The easiest way to visualize it is as obtaining computing resources, such as processing units, memory, and disk, as a service over the internet. It leverages the growing reliability of the internet, in terms of both availability and speed, to provide the requested resources.
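The "pay as you use" benefit comes down to simple arithmetic. The rates below are made up for illustration, not real provider pricing:

```python
# Illustrative only: assumed prices, not any real cloud provider's rates.
HOURLY_RATE = 0.10      # assumed cost per server-hour in the cloud
SERVER_PRICE = 3000.0   # assumed up-front cost of buying the same server

def cloud_cost(hours_used):
    """With pay-as-you-use, you pay only for the hours consumed."""
    return HOURLY_RATE * hours_used

# A batch job that needs a server 8 hours a day for 30 days:
hours = 8 * 30
print(cloud_cost(hours))  # about 24 dollars, versus 3000 up front
```

For intermittent workloads like this one, renting by the hour is far cheaper than owning the capacity; for a machine running flat out around the clock for years, the comparison can tip the other way.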

For all practical purposes, cloud computing leverages virtualization to rapidly allocate resources when they are requested. The best way to put it is that cloud computing uses virtualization to ensure seamless and agile management of the services it offers.

