
Hypervisors

Although hypervisors are useful because they allow different operating systems to share a single hardware host, there are several technical advantages and disadvantages to using a hypervisor in an enterprise. According to vconsulting.us, hypervisors are what make the Cloud possible. In addition, the advantages and disadvantages of hypervisors depend upon the type being used (Bredehoft, 2012).

            Type 1 hypervisors are installed directly onto “bare-metal” hardware and do not require an underlying operating system. There are several type 1 hypervisors, including names familiar even to people who do not regularly deal with computers: VMware ESX and ESXi, Citrix XenServer, Linux KVM, Microsoft Hyper-V, MokaFive, and XenClient. The advantages of using a type 1 hypervisor include direct access to the hardware, because the hypervisor is installed on bare metal; a thin design optimized for a minimal footprint, leaving more resources to give to the hosts; increased security, because the system is more difficult to compromise; usefulness for testing and lab settings; support for more than one virtual machine on the same hardware; and the fact that hardware failures will not affect the operating system.

            There are also several disadvantages associated with using a type 1 hypervisor: very large virtual machines are not supported (e.g., 1,000+ GB), particular hardware components are required, there is a cost associated with the license or support, the console interface is poor, and not every operating system can be run.

            Type 2 hypervisors are closer to an application that is installed on an operating system rather than directly on the bare metal. Examples of type 2 hypervisors include Parallels, VMware Fusion and Player, VMware Workstation, VMware Server, Microsoft Virtual PC, VirtualBox, Linux KVM, and Citrix Receiver. The advantages of using a type 2 hypervisor include the ability to run on a greater variety of hardware, because the host operating system controls hardware access; an easy-to-use interface; the ability to run inside a Windows operating system; suitability for lab testing and development; the ability to use several operating systems within a single operating system; an easier management paradigm for desktops, which is useful for enterprises; no need to provide dedicated hardware to every user, so a company can run its own; and the ability to secure data on the desktop.

            The disadvantages of type 2 hypervisors include decreased security, because interaction with the VM container makes it possible to copy the container for additional use; a larger overall footprint; the fact that the type 2 hypervisor must be installed in the host operating system, which is straightforward but can sometimes become complex; a loss of centralized management; and, lastly, that type 2 hypervisors cannot support as many VMs as type 1 can.

            Microsoft Hyper-V, VMware ESXi, and Hitachi Virtage are commonly used hypervisors in the enterprise. Hyper-V was formerly known as Windows Server Virtualization and provides platform virtualization on x86-64 systems. The Hyper-V architecture isolates virtual machines using partitions. At least one parent partition needs to be running Windows Server in order for the hypervisor to be able to access the hardware. Supported guest operating systems include Windows 7, Windows Server, Windows Vista, and Windows XP. This hypervisor has several disadvantages: it does not virtualize USB or COM ports; audio hardware is not virtualized; optical drives virtualized in the guest VM are read-only, so it is impossible to burn media to CDs and DVDs; there are reported issues with graphics performance on the host because the translation lookaside buffer is flushed frequently; Windows Server 2008 does not support maintaining network connections and uninterrupted service during VM migration; performance is degraded for Windows XP users; link aggregation is only supported by drivers that support NIC teaming; and there is no support for home editions of Windows (Conger, 2012).

            VMware ESXi is a type 1 (bare-metal, embedded) hypervisor used to run guest virtual servers directly on the host server hardware. This hypervisor is unique because it is placed on a compact storage device, which distinguishes it from VMware ESX. As previously stated, the VMware ESXi architecture is built to run on bare metal. In addition, it uses its own kernel rather than a third-party operating system. The VMware kernel connects to the outside world through the hardware, the guest systems, and a service console. For VMware ESXi to be able to virtualize Windows 8 or Windows Server 2012, the hypervisor must be version 5.x or later (VMware, 2004). There are several limitations of this system, covering infrastructure, performance, and networking. The infrastructure limits include a guest-system RAM maximum of 255 GB, a host RAM maximum of 1 TB, 32 hosts in a high-availability cluster, 5 primary nodes in ESX cluster high availability, 32 hosts in a Distributed Resource Scheduler (DRS) cluster, a maximum of 8 processors per virtual machine, a maximum of 160 processors per host, 12 cores per processor, and 320 virtual machines per host; in addition, ESXi prior to version 5 will not support the Microsoft operating systems Windows 8 and Windows Server 2012. The network limitations primarily involve the use of the Cisco Nexus 1000V distributed virtual switch and include: 64 ESX/ESXi hosts per VSM (Virtual Supervisor Module), 2048 virtual Ethernet interfaces per VMware vDS (virtual distributed switch), a maximum of 216 virtual interfaces per ESX/ESXi host, 2048 active VLANs (one to be used for communication between VEMs and the VSM), 2048 port profiles, 32 physical NICs per ESX/ESXi (physical) host, 256 port channels per VMware vDS, and a maximum of 8 port channels per ESX/ESXi host (Cisco, n.d.). The performance limitation is the additional work the CPU must perform in order to virtualize the hardware. According to VMware, the Virtual Machine Interface was developed to reduce this overhead using paravirtualization, although only a few operating systems support it.
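            A minimal sketch of how such published maximums might be checked programmatically when planning a deployment follows; the limit values below are copied from the figures quoted in this paragraph and are illustrative only, so VMware's official configuration-maximums documentation should be treated as authoritative.

```python
# Illustrative only: the limits mirror the figures quoted in the text,
# not an authoritative VMware configuration-maximums table.
ESXI_LIMITS = {
    "vcpus_per_vm": 8,           # maximum processors per virtual machine
    "processors_per_host": 160,  # maximum processors per host
    "vms_per_host": 320,         # virtual machines per host
    "hosts_per_ha_cluster": 32,  # hosts in a high-availability cluster
    "hosts_per_drs_cluster": 32, # hosts in a DRS cluster
}

def check_plan(plan: dict) -> list:
    """Return the limits a planned deployment exceeds (empty list = it fits)."""
    return [
        f"{key}: planned {plan[key]} exceeds limit {limit}"
        for key, limit in ESXI_LIMITS.items()
        if plan.get(key, 0) > limit
    ]

print(check_plan({"vcpus_per_vm": 16, "vms_per_host": 200}))
# -> ['vcpus_per_vm: planned 16 exceeds limit 8']
```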

            According to storageservers, Hitachi offers the “world’s first server virtualization software capable of running on multiple instances” (storageservers, 2012). It provides logical partitioning and is useful for multi-tenant-style cloud computing. To provide virtualization, the system integrates Kernel-based Virtual Machine (KVM) technology, which runs on top of Hitachi Virtage and serves as a base for the Hitachi BladeSymphony server line. Since this product is relatively new, there are not many known disadvantages, so users who select this hypervisor do so at their own risk. While older hypervisors have many known limitations, there are also known ways to work around them; buyers considering Hitachi Virtage should be aware that this is not yet the case. Despite this, the article argues that this hypervisor is useful for reducing the total cost of ownership because it uses server virtualization technology. The article also states that this hypervisor has a high level of hardware stability and is compatible with high-level Linux systems. It is expected that as the system is used, it will continue to be upgraded and made more user friendly to meet the needs of consumers.

            Certain uses of hypervisors could lead to a decrease in the total cost of ownership for an enterprise. According to the TechTarget article “Time to consider multiple hypervisors?”, many companies are finding that a single hypervisor is not enough for their data centers. At the same time, adding an additional hypervisor introduces several risks, so it is important to weigh the advantages and disadvantages of using one hypervisor versus many (Bigelow, n.d.). One of the major arguments the article makes for a second hypervisor is a decrease in the total cost of ownership (TCO) of an enterprise. In 2010, TechTarget surveyed information technology professionals about their hypervisor choices. They found that cost efficiency was a major issue for a majority of experts and drove their decisions; this was mainly a factor when participants wanted to consider an alternative to VMware virtualization. Their reasoning involved the need for more features and functionality, a desire for improved interoperability, and a wish to avoid vendor lock-in; although VMware is a good hypervisor, it is expensive, and lock-in would lead to purchasing only VMware-brand products, which is a disadvantage when budgets are tight.

            Information technology professionals argue that the cost involved in hypervisor selection is not driven by the particular hypervisor that is chosen but rather by the virtualization management strategy. While companies that run uniform x86-based servers can get by using a single hypervisor at a good price, they must keep in mind that it will not run as well, or offer all of its features, on mainframe, RISC, or SPARC-based servers; a better hypervisor is therefore needed for those systems. To address this, it is useful to use one hypervisor for server virtualization and a separate one for desktop virtualization. The article also notes that organizations may want yet another hypervisor to support their private clouds. In addition, utilizing several hypervisors can be cost effective depending on how the company's technology has evolved and how these systems were acquired. For example, a company may start out with a basic hypervisor that suits its needs. As the company evolves and needs a hypervisor with different capabilities, it can switch the old hypervisor to a new role instead of replacing it altogether. The company will then be running two hypervisors and be able to virtualize more efficiently without wasting any money.

            The direct cost of hypervisors is an additional issue in an enterprise. Most hypervisors are free to try, and some even come with the Windows Server operating system. However, it is important to understand that acquiring a hypervisor costs more than just obtaining the software license. The cost comes into play mainly with the features that accompany the hypervisor and the management tools needed to keep track of the virtual data center. In addition, running several hypervisors increases cost because more IT resources are needed to support and maintain the platforms. As a consequence, an enterprise that considers a second hypervisor needs to think carefully about that hypervisor's features and capabilities. If the second hypervisor leads to more efficient computing or better performance, it will save the company money overall because basic job functions can be performed more accurately.

            Unfortunately, neither a single hypervisor nor multiple hypervisors is a one-size-fits-all cost saver. Companies that handle more data and run many systems will likely benefit from multiple hypervisors, while smaller companies that do not require a lot of data will benefit from a single hypervisor. When considering the total cost of ownership of hypervisors, it is essential that an enterprise takes all of its computing needs into account before making a decision. Additional costs that companies must consider before budgeting for one or many hypervisors include the cost of replacement when an existing hypervisor must eventually be swapped out entirely. They should calculate the costs that business disruptions will cause, and consider that data may be lost or that overall performance may decrease depending on their hypervisor selection and the physical installation process.
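            The following back-of-the-envelope sketch shows how such a comparison might be set up; every figure is a hypothetical placeholder, and a real analysis would use the enterprise's own licensing, staffing, downtime, and savings numbers.

```python
# Hypothetical TCO comparison for one vs. two hypervisors; all values are
# placeholders for illustration, not real pricing data.
def tco(license_cost, mgmt_tools, it_staff, disruption_cost, efficiency_savings=0):
    """Total cost of ownership over the planning horizon, in dollars."""
    return license_cost + mgmt_tools + it_staff + disruption_cost - efficiency_savings

single = tco(license_cost=50_000, mgmt_tools=20_000, it_staff=80_000,
             disruption_cost=0)
dual = tco(license_cost=65_000, mgmt_tools=35_000, it_staff=100_000,
           disruption_cost=10_000, efficiency_savings=70_000)

print(f"single hypervisor TCO: ${single:,}")
print(f"two-hypervisor TCO:    ${dual:,}")
```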

            The implementation of hypervisors has a clear impact on system administration. Whether a company needs to maintain its current hypervisors, replace one, or add a new one, its information technology department will need to remain highly involved for several reasons. First, the system administrators should review the available hypervisors and pick the ones most suitable for the company's needs. It is unlikely that any other department would fully understand the technical implications of the hypervisor; depending on which one is selected, changes may be needed in the operating systems the company's network uses, in addition to the installation of software compatible with the hypervisor. Even if the IT department is not responsible for physically installing the hypervisor, it needs to be fully aware of how it was installed and how to troubleshoot and optimize the system should the need arise. Lastly, the IT department would be responsible for deciding whether the company is better off using a single hypervisor or multiple hypervisors, and which kinds.

            Since system administrators are generally responsible for computer and network safety, they should be especially concerned when using hypervisors. Although hypervisors can increase efficiency, they can also introduce a security threat; therefore, additional system administration is essential to ensure that all employees know how to protect the company's computers against these threats. Malware and rootkits can take advantage of hypervisor technology by installing themselves below the operating system, which makes them more difficult to detect. In this situation, the malware is able to intercept operations of the operating system without anti-malware software being able to detect it. While some information technology professionals claim there are ways to detect malware that uses a hypervisor-based rootkit, this issue is still up for debate (Wang et al., 2009). This technology is relatively new, and we cannot be certain that these security issues can be eliminated.

            A second reason the system administrators would require additional training is the x86 architecture typically used in PC systems. Virtualization is generally difficult on this type of system and requires a complex hypervisor; to address this, CPU vendors have added hardware virtualization assistance to their processors, such as Intel VT-x and AMD-V. These extensions support the hypervisor and allow it to work more efficiently. Other approaches include modifying the guest operating system to make system calls to the hypervisor (paravirtualization) and using Hyper-V to boost performance. It is essential for the information technology staff to be aware of the precise hypervisor their company is using, in addition to any potential modifications associated with it. Therefore, additional training and troubleshooting to resolve matters related to their hypervisors would be useful.

References

Bigelow, S. J. (n.d.). Time to consider multiple hypervisors? SearchDataCenter. Retrieved from http://searchdatacenter.techtarget.com/tip/Time-to-consider-multiple-hypervisors

Bredehoft, J. (2012). Hypervisors – Type 1/Type 2. Retrieved from http://vconsulting.us/node/24

Cisco. (n.d.). Cisco Nexus 1000V Series Switches Data Sheet. Retrieved from http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/data_sheet_c78-492971.html

Conger, J. (2012). Video: Microsoft Hyper-V Shared Nothing Live Migration. Jason Conger Blog. Retrieved from http://blogs.technet.com/b/uspartner_ts2team/archive/2012/07/23/shared-nothing-live-migration-on-windows-server-2012.aspx

Storageservers. (2012). Hitachi offers world’s first Server Virtualization software capable of running on multiple instances. Retrieved from http://storageservers.wordpress.com/2012/09/13/hitachi-offers-worlds-first-server-virtualization-software-capable-of-running-on-multiple-instances/

VMware. (2004). Support for 64-bit Computing. Retrieved from VMware.com

Wang, Z., Jiang, X., Cui, W., & Ning, P. (2009). Countering Kernel Rootkits with Lightweight Hook Protection. Microsoft/North Carolina State University. Retrieved from http://discovery.csc.ncsu.edu/pubs/ccs09-HookSafe.pdf


Virtual Memory Manager

Abstract

Virtual memory managers are used to support application execution. In early operating systems, an application was loaded into the physical memory of the computer and ran directly on that memory. Virtual memory management allows multiple applications to run simultaneously, manages the physical memory, and enhances memory capacity and utilization by swapping resource allocations. The overall purpose of the virtual memory manager is to increase the capability, capacity, and efficiency of physical memory by allocating resources, taking advantage of fragmented memory, and simplifying the organization of the application and its data. This is accomplished by establishing virtual memory addresses that are mapped to the physical locations of the application’s data. Modern computer architecture makes virtual memory management an integral part of hardware support and design. The benefits of virtual memory management range from protecting applications from interfering with each other’s processes to enabling resource sharing and simultaneous access to memory.
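As a concrete illustration of the address mapping described above, the following minimal sketch assumes a single-level page table with 4 KiB pages and hypothetical frame numbers; real virtual memory managers add multi-level tables, permission bits, TLBs, and swapping to disk.

```python
# Minimal sketch: translating virtual addresses to physical addresses through
# a (hypothetical) single-level page table with 4 KiB pages.
PAGE_SIZE = 4096

# Virtual page number -> physical frame number (illustrative values)
page_table = {0: 7, 1: 3, 2: 12}

def translate(virtual_address: int) -> int:
    """Map a virtual address to a physical address, or fault if unmapped."""
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    if vpn not in page_table:
        raise MemoryError(f"page fault: virtual page {vpn} is not mapped")
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # virtual page 1, offset 0x234 -> 0x3234 (frame 3)
```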


Managing Contention for Shared Resources on Multicore Processors

1. Modern multicore systems are designed as clusters of cores that share several hardware structures, for example LLCs (last-level caches such as L2 or L3), memory controllers, interconnects, and prefetching hardware. Because of this resource sharing, the clusters act as memory domains, and the shared resources form a memory hierarchy. Threads running on cores within the same memory domain compete for these shared resources (Bischof, 2008). To understand how contention for shared resources affects application performance, consider the following example. Four applications, Soplex, Sphinx, Gamess, and Namd (Balasubramonian, Jouppi, & Muralimanohar, 2011), are taken from the SPEC (Standard Performance Evaluation Corporation) CPU 2006 benchmark suite and run in parallel on an Intel quad-core Xeon system. The group of applications was run several times under three different schedules; each schedule paired the applications differently within the shared memory domains, covering the three possible pairing permutations (Fedorova, Blagodurov, & Zhuravlev, 2010).

  • Soplex and Sphinx ran in a memory domain, while Gamess and Namd shared another memory domain.
  • Sphinx was paired with Gamess, while Soplex shared a domain with Namd.
  • Sphinx was paired with Namd, while Soplex ran in the same domain with Gamess.

2. The main purpose is to capture the essence of memory-reuse profiles in a very simple metric. Once such a metric is identified, a way to approximate it from information already available to the thread scheduler is determined. Memory-reuse profiles are successful at modeling contention mainly because they capture two core qualities: sensitivity and intensity. A thread's sensitivity describes how much it suffers when sharing the cache with other threads. A thread's intensity describes how much it hurts other threads when sharing the cache with them. Most of the important information in memory-reuse profiles is captured by these two qualities. To approximate sensitivity and intensity using online performance information, we need metrics that can serve as the basis for modeling cache contention between threads; these are derived from the metrics in memory-reuse profiles (Fedorova, Blagodurov, & Zhuravlev, 2010).

3. When assessing the new models of cache contention, the main objective considered is the models' ability to generate contention-free thread schedules; a model is evaluated by how well it helps find the best schedules or avoid the worst ones. The Pain metric is used to build a scheduler that finds a good schedule. For instance, consider a system that contains two pairs of cores sharing two caches (the model also generalizes to more cores per cache). In this scenario, we need to identify the best schedule for all four threads. The scheduler enumerates the permutations of threads in the system in which the pairings within a memory domain are unique. For example, if there are four threads named A, B, C, and D, the unique schedules are: (1) {(A,B), (C,D)}; (2) {(A,C), (B,D)}; and (3) {(A,D), (B,C)}. Here the notation (A,B) denotes co-scheduling threads A and B in the same memory domain. For every schedule, such as {(A,B), (C,D)}, the scheduler calculates the pain of each pair, Pain(A,B) and Pain(C,D), using the equations presented earlier. The schedule with the lowest total Pain is taken as the 'estimated best schedule'. When the Pain metric is computed from actual memory-reuse profiles, it helps determine the best schedule; another approach is to approximate the Pain metric using online data. Once the estimated best schedule is determined, its performance is compared with the 'actual best schedule', which is obtained by running all possible schedules on real hardware and picking the one that performs best. Running every schedule on hardware is only practical for a limited number of workloads, but it is the most direct way of determining the actual best schedule (Fedorova, Blagodurov, & Zhuravlev, 2010).
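A minimal sketch of this schedule search follows. It assumes the Pain of a co-scheduled pair is each thread's sensitivity weighted by its partner's intensity, summed over both threads, which is one plausible reading of the metric; the profile numbers are invented for illustration and are not the authors' measurements.

```python
# Illustrative schedule search over the three unique pairings of four threads.
# Hypothetical (sensitivity, intensity) estimates per thread:
profiles = {"A": (0.9, 0.2), "B": (0.1, 0.8), "C": (0.7, 0.6), "D": (0.2, 0.1)}

def pain(x: str, y: str) -> float:
    """Assumed pair Pain: each thread's sensitivity times its partner's intensity."""
    sx, ix = profiles[x]
    sy, iy = profiles[y]
    return sx * iy + sy * ix

def schedules(threads):
    """Yield the unique ways to split four threads into two memory domains."""
    first, rest = threads[0], threads[1:]
    for partner in rest:
        other = tuple(t for t in rest if t != partner)
        yield [(first, partner), other]

best = min(schedules(list(profiles)),
           key=lambda sched: sum(pain(*pair) for pair in sched))
print("estimated best schedule:", best)
```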

4. This answer discusses the causes of contention in multicore systems. A number of experiments were performed to determine the main cause of contention on multicore systems. Initially, the degree of contention was measured and broken down across several kinds of shared resources, for instance the cache, the memory controller, the front-side bus, and the prefetching hardware (Das, Thankachan, & Debnath, 2011). The detailed arrangement of the experiments is described in the study. The first focus is contention for the shared cache, which occurs when competing threads evict one another's cache lines. However, this is not the main reason for performance degradation in multicore systems. The main causes of performance degradation are contention for the following shared resources:

  • Front-side bus
  • Prefetching resources
  • Memory controller

It is very difficult to isolate the limitations of, and the effects of, contention for the prefetching hardware: the influence of prefetching reflects the combined effect of all three factors and of the hardware itself. In these experiments, the long-standing memory-reuse models constructed to model cache contention are not effective; a model of contention for the shared cache alone, when applied in a real system, did not predict performance well in the presence of the other kinds of contention. By contrast, the cache-miss rate is an outstanding predictor of contention for the memory controller, the prefetching hardware, and the front-side bus (Fedorova, Blagodurov, & Zhuravlev, 2010). To create contention, each application was co-scheduled with the Milc application. Applications with many cache misses keep the memory controller and the front-side bus engaged, although this does not damage the applications running on the hardware. An application with a high LLC miss rate also makes heavy use of the prefetching hardware, because the prefetcher issues requests for the same data that is counted via cache misses. Thus, a high miss rate is an excellent indicator of heavy use of the prefetching hardware (Fedorova, Blagodurov, & Zhuravlev, 2010).
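The observation that LLC miss rate predicts pressure on the memory controller, prefetcher, and front-side bus suggests a simple pairing heuristic, sketched below with invented miss-rate numbers; this is an illustration of the idea, not the authors' exact algorithm.

```python
# Heuristic sketch: pair the most miss-intensive threads with the least
# miss-intensive ones so that no memory domain holds two heavy missers.
def pair_by_miss_rate(miss_rates: dict) -> list:
    ranked = sorted(miss_rates, key=miss_rates.get, reverse=True)
    return [(ranked[i], ranked[-(i + 1)]) for i in range(len(ranked) // 2)]

# Hypothetical LLC misses per 1,000 instructions for the four applications
miss_rates = {"Soplex": 25.0, "Sphinx": 18.0, "Gamess": 0.3, "Namd": 0.6}
print(pair_by_miss_rate(miss_rates))
# -> [('Soplex', 'Gamess'), ('Sphinx', 'Namd')]
```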

References

Balasubramonian, R., Jouppi, N. P., & Muralimanohar, N. (2011). Multi-core cache hierarchies. Morgan & Claypool.

Bischof, C. (2008). Parallel computing: Architectures, algorithms, and applications. IOS Press.

Das, V. V., Thankachan, N., & Debnath, N. C. (2011). Advances in power electronics and instrumentation engineering: Second International Conference, PEIE 2011, Nagpur, Maharashtra, India, April 21-22, 2011. Proceedings. Springer.

Fedorova, A., Blagodurov, S., & Zhuravlev, S. (2010). Managing contention for shared resources on multicore processors. Communications of the ACM, 53(2), 49-57.


General Security Policy

  1. Scope

This policy is applicable to all information resources, internally connected systems, employees, and third parties who have access to the organization's wireless network. The scope of this policy also covers all legacy and future equipment, which will be configured and tuned as per the reference documentation (Barman, 2002).


Security Product Web Sites

There are several security product websites on the web that offer products to individuals or companies for the purpose of internet security. As a Wireless Security Professional (WS), the responsibility is to wade through all of these websites and find the right products that fit the company. The purpose of this paper is to evaluate the websites of Interlogix and Websense Security Products, comparing their strengths and weaknesses and their products, and identifying areas where they need improvement.


Google Chrome: The Upside of Process

Introduction

Despite many changes in consumer preferences and the rise of new technology, Google has remained a leader in information technology, particularly in the field of software development. Apart from its search engine, it has many high-functioning platforms, one of which is Google Chrome, its own internet browser. Google uses the 'Agile' process in the development of its web browser: an incremental, modifiable approach under which all applications are linked. The process model used to make this web browser function, the application of the system and its software development, and the suitability of such a model for this corporation are discussed below.


Analytical Agent Technology for Complex and Dynamically Evolving Environments

Abstract

The importance of simulation environments stretches throughout the academic and corporate spheres, as they are used not only in aeronautics, electronics, and chemistry, but also in biology and physics. All of these industries rely heavily on simulators to better understand and validate the numerous theories that help explain complex and dynamic systems. The purpose of this project is to investigate the agent-based simulation paradigm for designing and developing frameworks focused on monitoring such systems. Within this research, the use of multi-agent simulators and multi-modeling agents offers a new perspective for addressing evolving complex and dynamic systems, one that differs from explanation through mathematical equations. The techniques created will be used to develop multi-agent simulators that are able to track the dynamically evolving systems being simulated.


Robust Security Networks

A Robust Security Network (RSN) is exclusive to wireless networks: “a network protocol that is used for creating and establishing the secure communication connections and transmissions over the safe wireless networks such as 802.11” (Riaz, n.d.). Robust Security Networks are commonly known as WPA2, which replaced the earlier WEP security specification. A Robust Security Network operates in five phases: Discovery, Authentication, Key Generation and Distribution, Protected Data Transfer, and Connection Termination. In a typical Robust Security Network, the client wirelessly sends probe requests that are received by access points, which respond with the complete RSN information in an information element (IE) exchange. The wireless client's NIC is then authenticated by the approved technique installed, with the access points providing authentication and exchanging information until the appropriate secure interface is established.
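As a purely illustrative sketch of the phase ordering described above (not the real 802.11i frame exchanges or key derivations), the five phases could be modeled as follows.

```python
# Illustrative model of the five RSN operational phases named in the text;
# the comments summarize each phase, the code only steps through them in order.
from enum import Enum, auto

class RSNPhase(Enum):
    DISCOVERY = auto()                        # probe request/response, RSN IE exchange
    AUTHENTICATION = auto()                   # e.g., 802.1X/EAP or pre-shared key
    KEY_GENERATION_AND_DISTRIBUTION = auto()  # handshake derives and installs keys
    PROTECTED_DATA_TRANSFER = auto()          # traffic protected with the derived keys
    CONNECTION_TERMINATION = auto()           # keys discarded, association torn down

def run_connection() -> None:
    for phase in RSNPhase:                    # Enum preserves declaration order
        print(f"entering phase: {phase.name}")

run_connection()
```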


Agile Development Methodologies

Agile development methodologies (ADM) have been gaining popularity because it is becoming increasingly difficult to follow static, plan-driven methods when the requirements of software projects constantly change as a result of both internal and external factors. ADM resolves the dilemma by breaking software projects into components, each of which can exist on its own and is fully compatible with the other components. As a result, a change or new requirement does not disrupt the whole project or require starting from scratch. Each component, or iteration, is not only shorter in length but also carries lower complexity and risk. Four common types of agile development methods are Extreme Programming, Crystal methods, Scrum, and Feature-Driven Development.


Wireless Security Action Plan

Comparison between Different Wireless Standards

802.11a

The 802.11a standard supports speeds of up to 54 Mbps and operates in the 5 GHz band. For transmission, it uses the OFDM technique. The 802.11a standard can use several different data rates: 6, 9, 12, 18, 24, 36, 48, and 54 Mbps.


Beyond Hardening

Wireless networks are now often preferred over wired communications, as they are more convenient, portable, and easy to use. However, wireless networks pose many security risks, because data is transmitted and received wirelessly. These risks must be addressed to protect information assets and critical data. After hardening the wireless network by all available means, three factors should still be addressed: confidentiality, integrity, and availability. To address these factors, the organization needs to conduct a risk assessment in order to evaluate the security controls placed on the wireless networks (Buttyán & Hubaux, 2008). The objective is to maintain security for wireless networks through an ongoing maintenance process. Some of the factors that need to be addressed for maintaining wireless network security are given below:


Virtualization Across the Board

Virtualization

Virtualization is a technology that divides a single piece of hardware into multiple instances. Before virtualization, the hardware and the operating system (OS) were tied together one-to-one, and the OS consumed all of the resources available in the box. Dividing the hardware makes resource assignment simple: a single box can host several instances, or nodes, each receiving a share of resources such as RAM, CPU, permanent storage space (I/O capacity), and network addresses (bandwidth). This approach provides an effective method of resource management for a web application. For instance, a web application hosted on a relatively small node avoids the cost of using an entire box, and resources can still be moved from one node to another; to improve efficiency, unused resources in a box are reassigned to where they are needed. Optimal resource management is achieved by the virtual server; in other words, there is no need to redeploy a web application and its components, or to reinstall a new operating system on another host, when migrating web services. Virtualization accomplishes this by using a hypervisor, which hosts and manages resources for many guest operating systems and nodes. Hence, when a web application is deployed with virtualization technology, it runs on a guest operating system powered by the hypervisor. This is the major factor that provides scalability and allows more resources to be assigned to the web application with a few clicks (Kusnetzky, 2011).
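The following minimal sketch illustrates the resource-management idea described above, with a hypothetical "hypervisor" object that carves nodes out of a fixed pool of host RAM and CPU and can later grow a node from the unused pool; it is an illustration of the concept, not any real hypervisor's API.

```python
# Conceptual sketch of hypervisor-style resource allocation (illustrative only).
class Hypervisor:
    def __init__(self, ram_gb: int, cpus: int):
        self.free = {"ram_gb": ram_gb, "cpus": cpus}  # unused resources in the box
        self.nodes = {}                               # guest nodes and their shares

    def create_node(self, name: str, ram_gb: int, cpus: int) -> None:
        """Carve a new guest node out of the box's free resource pool."""
        if ram_gb > self.free["ram_gb"] or cpus > self.free["cpus"]:
            raise RuntimeError("not enough free resources in the box")
        self.free["ram_gb"] -= ram_gb
        self.free["cpus"] -= cpus
        self.nodes[name] = {"ram_gb": ram_gb, "cpus": cpus}

    def grow_node(self, name: str, ram_gb: int = 0, cpus: int = 0) -> None:
        """Move unused resources from the free pool to an existing node."""
        if ram_gb > self.free["ram_gb"] or cpus > self.free["cpus"]:
            raise RuntimeError("not enough free resources in the box")
        self.free["ram_gb"] -= ram_gb
        self.free["cpus"] -= cpus
        self.nodes[name]["ram_gb"] += ram_gb
        self.nodes[name]["cpus"] += cpus

host = Hypervisor(ram_gb=64, cpus=16)
host.create_node("web-app", ram_gb=8, cpus=2)
host.grow_node("web-app", ram_gb=8)   # scale up without reinstalling an OS
print(host.nodes, host.free)
```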


WPA, WEP, & TKIP

Wi-Fi Protected Access

WPA (Wi-Fi Protected Access) was introduced in 2003 as a security technology for Wi-Fi wireless networks, created in response to vulnerabilities in WEP identified by the networking industry. Two standard technologies, the Temporal Key Integrity Protocol (TKIP) and the Advanced Encryption Standard (AES), are used along with WPA to provide stronger encryption than WEP (Wi-fi protected access, 2007). Moreover, WPA includes authentication support, whereas WEP does not contain any authentication mechanism. WPA is easy to use and provides security for VPN tunneling, as WEP does.

For home networks, a variation of WPA known as Pre-Shared Key, or WPA-PSK, is used. This is a simpler form of WPA but is still very effective for home networks; a static key is defined in order to use WPA-PSK, which makes it more difficult for hackers to damage any information. Several other WPA variations are available that contain technical enhancements. In an organization, WPA is used with an authentication server, whose role is to supply centralized access control and management within the organization. WPA can also be used in pre-shared key mode in small companies or in homes; in that mode, WPA does not require an authentication server (Wi-fi protected access, 2007).


Temporal Key Integrity Protocol

The Temporal Key Integrity Protocol (TKIP) is specified by the IEEE in 802.11i and addresses the encryption algorithm for wireless connectivity, while another part of the specification supervises the integrity of messages. TKIP was designed around a limiting factor, namely the legacy hardware on which it had to operate, and therefore it could not require advanced encryption. Instead, TKIP forms a wrapper around the established WEP encryption (Temporal key integrity protocol, 2007). The protocol uses the same RC4 algorithm engine as WEP; however, there is one key change: the TKIP key is 128 bits long, which resolves the issues related to WEP's short key length.
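Both WEP and TKIP ultimately drive the RC4 stream cipher mentioned above. The sketch below implements only the bare RC4 core (key scheduling plus keystream XOR) to show how a per-packet key turns plaintext into ciphertext; it omits the WEP IV handling and the TKIP key mixing and Michael integrity check, and it should never be used for real security, since RC4-based protection is considered broken.

```python
# Bare RC4 core for illustration only: WEP and TKIP build (insecurely and
# more elaborately) on this keystream-XOR construction.
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute S based on the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR keystream with the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

packet_key = bytes(16)                                # hypothetical 128-bit per-packet key
ciphertext = rc4(packet_key, b"hello WLAN")
assert rc4(packet_key, ciphertext) == b"hello WLAN"   # same keystream decrypts
```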

Wired Equivalent Privacy

Wired Equivalent Privacy (WEP) is a security protocol defined by the IEEE 802.11b standard, constructed to provide an adequate level of security for a wireless local area network (WLAN). In general, a wired local area network is safeguarded physically, by the walls of the building and the protective coating on CAT 5 cable, whereas a WLAN operates on radio waves and is accessible anywhere within the given radius. WEP seeks to provide the same level of security as a wired LAN by applying encryption to data transmitted over the WLAN. Data encryption encapsulates the data and passes it through a secure tunnel from source to destination, preventing vulnerable loopholes, unauthorized access, man-in-the-middle attacks, hacking, and so on. Furthermore, for end-to-end protection of data, virtual private networks are also used, via a dialer and user credentials; the VPN establishes a secure tunnel using the Point-to-Point Tunneling Protocol (PPTP).


References

Temporal key integrity protocol. (2007). Network Dictionary, 484.

Wi-fi protected access. (2007). Network Dictionary, 526.



Encryption Algorithms (Symmetric & Asymmetric)

Encryption algorithms are very important to many applications, both in computer science and in other fields. For an application designer, understanding the different types of encryption algorithms, and the individual algorithms themselves, is very important. The biggest distinction among encryption algorithms is whether an algorithm is symmetric. Symmetric algorithms, such as AES, use the same private key (also called the secret key) at both the sending and receiving sides. Asymmetric algorithms (usually known as public key cryptography), such as ECC and RSA, use a public key and a private key; the private key is known only to the receiver, while everyone has access to the public key. The public key is used to encrypt a message, and the private key, held only by its owner, is used to decrypt it. The design of asymmetric algorithms does not allow someone who has only the public key to decrypt messages. In this paper, we shall examine three encryption algorithms: AES (the Advanced Encryption Standard, a symmetric algorithm), ECC (Elliptic Curve Cryptography, an asymmetric algorithm), and RSA (named after its inventors, Ron Rivest, Adi Shamir, and Leonard Adleman, an asymmetric algorithm).
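To make the public/private key split concrete, the following toy "textbook RSA" sketch uses deliberately tiny primes so the arithmetic is visible; real RSA relies on very large primes, padding schemes such as OAEP, and vetted cryptographic libraries rather than hand-rolled code.

```python
# Toy textbook RSA with tiny primes -- for illustration only, never for real use.
p, q = 61, 53                       # two small primes (chosen for the example)
n = p * q                           # modulus, shared by both keys
phi = (p - 1) * (q - 1)             # Euler's totient of n
e = 17                              # public exponent, coprime with phi
d = pow(e, -1, phi)                 # private exponent: modular inverse (Python 3.8+)

message = 65                        # a message encoded as an integer smaller than n
ciphertext = pow(message, e, n)     # anyone can encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)   # only the private key (d, n) decrypts it

assert recovered == message
print(f"n={n}, e={e}, d={d}, ciphertext={ciphertext}, recovered={recovered}")
```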