China and its neighboring countries in East and Southeast Asia are widely known for their deep cultural and traditional roots. These societies have long depended on the ways their ancestors managed their communities through religious principles and the guidance handed down by their elders. Because they regard their cultural heritage as central to the progress they hope to achieve, these regions place great value on the direction and guidelines provided by the history of their people. As Mao observed, imperialist aggression shattered the fond dreams of the Chinese about learning from the West; it was very odd that the teachers were always committing aggression against their pupil, and though the Chinese learned a good deal from the West, they could not make it work and were never able to realize their ideals (Mao Tse Tung, 3). This is why the Chinese nation came to depend strongly on communism and the principles by which it directs society, and why Chinese communities largely follow the administration's direction. They believe that the constraints of governance under a communist administration give them the chance to follow a path toward progress. Between 1945 and 1947 this was evident in the lifestyle of the people of the region, and the change affected the country's overall economic condition. People were able to find their roles in the path of progress the government chose, as dictated by its administrators. Although they were given the opportunity to become excellent at what they were good at, they were not given the liberty to choose what to be excellent in. Nevertheless, this form of control markedly boosted the economic status of the nations in the region.

Children are categorized according to their abilities and directed to schools that help them perfect those areas of expertise. They are groomed to become the individuals who can make the society they live in "a better place to live in." In one sense, this approach to governance restricts the manner in which the people's right to freedom is recognized. For the administrators, however, this approach was one of the reasons China and its neighboring countries were able to withstand the conditions that the years beyond 1947 brought them. Overall, even though the Western countries' approach to progress was different, the path chosen in the region proved, for its administrators and its people, the better choice as they embraced development toward a new and modern age.


Mao Tse Tung. On the People's Democratic Dictatorship: In Commemoration of the Twenty-Eighth Anniversary of the Communist Party of China. Selected Works of Mao Tse Tung.


Legal Issues

Government Investigations and Access to Information

The Fourth Amendment to the U.S. Constitution states that "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized." This amendment is an important proscription against the potential overreach of governments or public agencies seeking information from private citizens, businesses, and other organizations. Government agencies attempting to make a legal case against an individual or an organization must adhere to legal restrictions and guidelines in their efforts to obtain information useful in investigations and legal proceedings. The public's right of access to information is likewise limited and restricted, with laws such as the Freedom of Information Act (FOIA) prescribing what, when, and how information is to be made public.

Governments have a number of ways in which to gather information. In the context of government oversight and regulation of industries, financial sectors, and other areas of concern, the means by which regulators and government agencies gather information can and do vary according to particular circumstances and situations. At the most basic level, regulators often receive (and transmit) information to and from those that are subject to such regulation. Put simply, "regulators talk to the regulated" (Carter & Harrington, 2000). The information gathered this way is often "communicated in an informal manner and on a voluntary basis" (Carter & Harrington, 2000), and it is this typical form of information-gathering that underpins much of government oversight and regulation.

In instances where the government is attempting to bring a legal case against a business or organization over which it has regulatory power, the need for access to information is of significant concern; acquiring information is what gives regulators the requisite power to successfully prosecute such cases. As an example of how regulatory agencies exert power and gather information in the context of legal proceedings, it will be helpful to examine several specific instances of such activities by a government agency. While the Drug Enforcement Administration (DEA) is responsible at the federal level for U.S. efforts to combat the problems associated with illegal drugs, it is also responsible for regulatory oversight of the handling of legally prescribed controlled medications. In recent years, the abuse of prescription pain medications and other prescription drugs has received significant attention in the media, and has prompted both greater scrutiny and legal action on the part of the DEA (2013). When conducting investigations related to the handling and dispensation of medications subject to the Controlled Substances Act (CSA), the DEA must use a number of methods to acquire useful and necessary information.

At its core, the most fundamental question about the government's interest in acquiring information is whether or not the request is reasonable. Government agencies and representatives who seek information from private citizens or from businesses and other organizations must have a reasonable purpose for asking for such information (Carter & Harrington, 2000). If the request for information or access might potentially turn up evidence that a crime or crimes have been committed, it is sometimes necessary to determine whether the request for access is reasonable before such access is granted. For example, if the Occupational Safety and Health Administration (OSHA) wishes to carry out an inspection of a business, the OSHA representative typically needs a warrant (Carter & Harrington, 2000). If consent for the inspection is granted without a representative of the business requesting a warrant, the information uncovered in the inspection is typically admissible in legal proceedings. If the business requests a warrant and one cannot be produced, the business does not have to grant consent for the search unless and until OSHA is granted a warrant.

In the context of DEA oversight of the handling of legally prescribed controlled substances, the agency is legally afforded strict control and oversight of the activities of physicians and other prescribers, pharmacists and pharmacies, wholesale and retail distributors of such medications, and the companies that manufacture these medications. A significant body of regulatory procedures, guidelines, and laws has been established that is intended to assure that the activities of these various actors are transparent and that the information related to such activity is easily and readily available to the DEA. Officials from the DEA and the U.S. Department of Justice (DOJ) have used this information to build cases against individuals and organizations that have violated the relevant policies and laws related to the handling of controlled prescription medications; when such information is not forthcoming, these same officials have used the lack of information as the substantive standard for demonstrating a reasonable need for it (2013).

In recent years the state of Florida has received intense scrutiny from the DEA for activities within the state that fail to adhere to the guidelines and laws regulating the handling and dispensation of a number of medications covered by the CSA. According to DEA officials and government prosecutors, Florida has been home to a significant number of so-called "pill mills"; these are doctors' offices that prescribe (and sometimes even dispense) pain medications and other controlled substances at rates the DEA considers outside the boundaries of legitimate medical use. These medications are subject to widespread abuse by addicts, and the DEA has charged that the pill mills contribute to the problem by flooding the streets of Florida with these medications.

Oversight of these medications is supposed to be strict, with doctors limiting the amount of medications they prescribe, pharmacists and pharmacies watching for prescriptions in amounts that should raise red flags, and distributors flagging orders from pharmacies that appear to be excessive. In a recent case settled between the DEA and Walgreens, a large national pharmacy chain, Walgreens agreed to pay a fine of $80 million and to overhaul its methods of handling and dispensing controlled medications. Among the charges the DEA leveled against Walgreens were accusations that the chain and several of its individual pharmacy locations had failed to maintain adequate records and failed to notify the DEA, per statutory regulation, about suspicious prescriptions at the retail level and excessive orders at the distribution level. It is this sort of information that would, or at least should, normally be communicated between the regulated and regulators. When discrepancies in records and errors in reporting were uncovered by the DEA, these failures served as the foundation of the agency's requests for the warrants needed to uncover more information and ultimately to successfully prosecute its case.

Although the DEA has made the argument that the availability of these controlled medications on the streets poses a significant public health risk, there are a number of factors that make it difficult, or sometimes impossible, for the public to acquire information related to the activities of the DEA both in terms of these investigations and of the overall activities of the agency. The Freedom of Information Act (FOIA), which ostensibly makes it possible for the public to request information about the activities of government, does not always assure that such requests will be met. A number of organizations and individuals who have made FOIA requests to the DEA and DOJ about the activities of the DEA have been denied; according to a recent report, the rate at which such FOIA requests about the DEA have been denied has jumped 114% since the beginning of the administration of President Barack Obama (Rumsey, 2012). Other legal restrictions, such as those that protect the privacy of patients’ medical records, further limit the amount of information available to journalists and other investigators where the DEA is concerned.

There are laws that further protect and enhance the rights of the public to access information about some aspects of the medical and pharmaceutical industries. The Physician Payments Sunshine Act provides transparency of payments from pharmaceutical companies to physicians, which can expose instances where physicians are receiving payments from the same companies whose medications these doctors are dispensing. Sunshine laws offer little in the way of access to information about the activities of the DEA, however, despite the fact that the activities of this agency may be as significant an area of public concern as the activities of the individuals and organizations the DEA regulates. In any instance where information is useful and necessary, whether for use by the government or for the edification of the public, there are a number of laws intended to ensure such access.


Carter, L. H., & Harrington, C. B. (2000). Administrative law and politics: Cases and comments. New York, NY: Longman.

Denver News Releases, 05/30/13. (2013, May 30). Retrieved from

Pharma Compliance: License Verification | Healthcare Data Solutions | Healthcare Data Solutions. (2012). Retrieved from

Rumsey, M. (2012, July 17). The News Without Transparency: DEA FOIA rejections have increased 114 percent since the end of Bush administration – Sunlight Foundation Blog. Retrieved from

Walgreens agrees to pay a record settlement for civil penalties under the Controlled Substances Act. (2013, June 11). Retrieved from




Environmental Issues

DiMento, J. F. C., & Doughman, P. M. (2007). Climate Change: What It Means for Us, Our Children, and Our Grandchildren. The MIT Press, p. 3. Retrieved from

One of the major reasons the world has yet to make progress toward slowing and reversing human effects on the earth is social. Many politicians argue that enacting climate change policy is too expensive and not worthwhile. In the United States, the division between the political opinions of Democrats and Republicans is a major contributing factor: Republicans typically claim that global warming is a hoax, while Democrats actively advocate for environmental policies.

Knight, J., Kenney, J. J., Folland, C., Harris, G., Jones, G. S., Palmer, M., Parker, D., & Scaife, A. (2009). Do global temperature trends over the last decade falsify climate predictions? Bull. Amer. Meteor. Soc., 90(8), S75–S79. Retrieved from

Global warming is typically thought of as a process of constant warming of the earth's surface. This study shows that global temperature change has slowed within the last decade. Despite this, there is clear evidence of climate change over the long term, which is especially apparent when temperature changes are graphed year by year.

NOAA. (2007). Patterns of greenhouse warming. GFDL Climate Modeling Research Highlights. Retrieved from action/user_files/kd/pdf/gfdlhighlight_vol1n6.pdf

This pamphlet summarizes the findings of the Geophysical Fluid Dynamics Laboratory. These studies traced the patterns of greenhouse warming and concluded that summer warming over continents may be accompanied by drier soils, the most warming is expected to occur during the winter in North America and north-central Asia, warming in this century will occur more over land than oceans, and the increase of surface air temperatures in response to increasing greenhouse gas levels will not be geographically uniform.

Raupach, R., Marland, G., Ciais, P., Le Quéré, C., Canadell, G., Klepper, G., & Field, B. (2007). Global and regional drivers of accelerating CO2 emissions. Proceedings of the National Academy of Sciences, 104(24), 10288–10293. Retrieved from

Carbon dioxide emissions trap heat in the atmosphere and are directly responsible for the greenhouse effect. These emissions have increased over the years as a result of several factors, including the more frequent use of fossil fuels and industrial processes. The article discusses the distribution of carbon dioxide emissions across the world and the reasons for their release, and provides information on the growth of this practice.

Wentz, F. J., Ricciardulli, L., Hilburn, K., & Mears, C. (2007). How much more rain will global warming bring? Science, 317(5835), 233–235. Retrieved from

This article discusses the relationship between global warming and rain. Using climate models and scientific observation, it has been determined that the amount of water stored in the atmosphere will increase as the surface of the earth becomes warmer. While earlier studies indicated that precipitation would increase at only a portion of this rate, the group found that precipitation and surface warming have a direct relationship: an increase in one leads to a proportional increase in the other. Therefore, global warming will bring a greater extent of precipitation.


Advantages and disadvantages of hospital payment systems


Advantages and disadvantages of the following hospital payment systems with respect to cost containment and provider behavior:

  • Fee-for-service
  • Per diem
  • The DRG-based payment system (i.e., Medicare’s Inpatient Prospective Payment System)
  • Capitation


Fee-for-service

A major advantage of the fee-for-service (FFS) payment model is that services are paid for separately rather than bundled together as in other plans. This benefits both the hospital and the provider, since under this payment system earnings can be increased because patients are charged for each intervention. For example, if a patient has surgery, the surgeon is paid for the operation while the hospital stay is billed separately, so both benefit from the procedure. There are thus opportunities to provide more care, since services are billed individually. In countries such as Japan, fee-for-service payment methods are connected to national pricing to contain costs within health care organizations.

A notable disadvantage of this method, however, is that patients tend to be offered treatments that are unnecessary but are added because the physician can derive a fee for the service. In this case the emphasis shifts away from quality of care toward quantity of care; critics argue that the model is not cost effective because the focus is on quantity rather than quality. As such, whether patients are heard regarding their complaints becomes unimportant to both hospital and physician. Efficiency is also greatly compromised, since the goal is directed more toward improving the census than toward quality of care (Fuchs, 2009).

Per diem

Per diem is a limited model of the prospective payment technique whereby patients pay a daily rate for their health care services when hospitalized; reimbursement comes through a third-party payer. An example of this system and its advantages for health care organizations, especially hospitals, is practiced by the Indian Health Service, which has found it useful to combine per diem payment with supplemental health insurance plans. It has been executed with such dexterity that the payment system has become a tradition in that society regarding the fairness of reimbursing physicians for services rendered to patients who are hospitalized for extended periods of time (Casto & Layman, 2006).

Critics argue that the method can be exploitative toward patients, because providers may take advantage of the opportunity to increase the number of days patients remain hospitalized, or hospitalize patients unnecessarily. While all of this may be true, the system is cost effective because calculating daily rates is far less complicated than coding charges per service. Cost is therefore contained, and the health care facility makes a greater profit than under many other payment methods (Casto & Layman, 2006).
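The cost-containment point above can be sketched in a few lines; the daily rate and itemized charges below are hypothetical figures for illustration, not actual reimbursement rates.

```python
# Hypothetical comparison of per diem vs. itemized fee-for-service billing.
# All dollar figures are illustrative assumptions, not real rates.

def per_diem_total(daily_rate: float, days: int) -> float:
    """Per diem: one flat rate per hospital day, regardless of services used."""
    return daily_rate * days

def fee_for_service_total(service_charges: list[float]) -> float:
    """Fee-for-service: every individual service is coded and billed separately."""
    return sum(service_charges)

# A five-day stay at a hypothetical $2,000 daily rate needs one calculation:
print(per_diem_total(2000.0, 5))
# The same stay billed per service requires coding every individual charge:
print(fee_for_service_total([3500.0, 1200.0, 450.0, 450.0, 4800.0]))
```

The per diem total needs only the length of stay, which is why daily-rate calculation is simpler than per-service coding.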


Capitation

A great advantage of the capitation payment method in health care relates to the third-party payer reimbursement strategy. Payment is calculated by affixing to providers 'a certain amount per given period, per capita amount for a period' (Casto & Layman, 2006, p. 4). The term per capita means per head, or per member per month (PMPM). Usually this is the amount of money paid to the provider or hospital on a monthly basis once the client/patient is enrolled in the health insurance plan. It means that providers receive payment for services for all group members regardless of whether a patient is seen or not. This is a tremendous advantage for health maintenance organizations (Hughes et al., 2004).

Consequently, the amount of services delivered has no effect on payment, because a set amount of money is allotted to the organization or provider for the period. If the provider has entered into an agreement to offer a certain amount of services within a given period for a set of employees, that is the payment that will be received. This is a notable disadvantage, but the method can still contain costs for patients in long-term care facilities (Casto & Layman, 2006).
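A minimal sketch of the PMPM logic described above; the enrollment figure and rate are hypothetical.

```python
def capitation_revenue(members: int, pmpm_rate: float, months: int = 1) -> float:
    """Capitated revenue depends only on enrollment, never on utilization."""
    return members * pmpm_rate * months

# 1,000 enrolled members at a hypothetical $45 PMPM rate:
monthly = capitation_revenue(1000, 45.0)
annual = capitation_revenue(1000, 45.0, months=12)
# Payment is the same whether members are seen frequently or not at all.
print(monthly, annual)
```

Because utilization never appears in the formula, extra services reduce the provider's margin rather than raising revenue, which is the disadvantage the paragraph above describes.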

The DRG-based payment system (i.e., Medicare’s Inpatient Prospective Payment System)

An advantage of the DRG-based payment system is the assignment of a specific DRG weight by the Centers for Medicare & Medicaid Services to each patient's care profile. This weight gives an estimate of the services available to that Medicare recipient under the DRG program. It also helps the medical records department align these resources with those received by other recipients. The purpose of the weights is to account accurately for cost differences among the various treatments administered by care providers during hospitalization; conditions that cost more are assigned a higher DRG weight. For example, 'in fiscal year 2001 the DRG weights ranged from .5422 for a concussion (DRG 32) to 1.4966 for viral meningitis (DRG 21) to 19.0098 for a heart transplant (DRG 103)' (Blount & Waters, 2001, p. 12).

However, while the weight assignment is a great advantage for cost containment by hospitals and providers, non-physician services provided by hospitals cannot be reimbursed through this system. The organization or provider has to use another route to submit such costs directly for reimbursement through the PPS (Blount & Waters, 2001).
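The fiscal year 2001 weights quoted above can illustrate how a relative weight scales a payment; the base rate here is a hypothetical figure, and real IPPS payments add further adjustments (wage index, IME, DSH).

```python
# DRG relative weights quoted above (fiscal year 2001).
DRG_WEIGHTS = {
    32: 0.5422,    # concussion
    21: 1.4966,    # viral meningitis
    103: 19.0098,  # heart transplant
}

def drg_payment(base_rate: float, drg: int) -> float:
    """Simplified payment: the hospital's base rate scaled by the DRG weight."""
    return round(base_rate * DRG_WEIGHTS[drg], 2)

base = 5000.0  # hypothetical base rate, not an actual CMS figure
print(drg_payment(base, 32))   # lowest-weight condition pays the least
print(drg_payment(base, 103))  # a heart transplant pays ~35x a concussion
```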


Blount, L. L., & Waters, J. (2001). Managing the Reimbursement Process (3rd ed.). Chicago: AMA Press.

Casto, B., & Layman, E. (2006). Principles of Healthcare Reimbursement. American Health Information Management Association.

Fuchs, V. (2009). Eliminating 'waste' in health care. Journal of the American Medical Association, 302(22), 2481–2482.

Hughes, J., Averill, F., Eisenhandler, J., Goldfield, N., Muldoon, J., Neff, M., & Gay, J. (2004). Clinical risk groups (CRGs): A classification system for risk-adjusted capitation-based payment and health care management. Medical Care, 42(1), 81.




The case study for this lesson covers the hospitalization costs of a 70-year-old woman who underwent a kidney transplant at a general hospital, accumulating a total of $150,000 in Medicare-approved charges associated with the procedure. This report outlines the individual costs under the DRG description/case weights: 115, Permanent Cardiac Pacemaker, 3.5513; 302, Kidney Transplant, 4.1370; and 441, Hand Procedure/Surgery, 0.8785. For the surgery itself, the operating and capital payments to the hospital will be calculated, along with whether the hospital is eligible for Medicare outlier payments and the total payment the hospital can receive for the entire procedure.

Case study Application

The DRG description/case weight refers to the diagnosis-related group (DRG), which classifies hospital inpatient cases for Medicare services. Specifically, DRGs classify all human diseases based on the 'affected organ system, surgical procedures performed on patients, morbidity, and sex of the patient' (Gottlober, 2001, p. 2). This classification takes into consideration an additional eight primary diagnoses along with six procedures performed during Mrs. Smith's hospitalization. Consequently, when a weight is assigned to Mrs. Smith's procedures, it shows the Medicare resources available to her compared with other recipients with the same condition or disease; the more intense the disease condition, the greater the weight (Gottlober, 2001).

Precisely, the Permanent Cardiac Pacemaker (DRG 115, weight 3.5513) has a lower weight than the Kidney Transplant (DRG 302, weight 4.1370), and both have a higher weight than the Hand Procedure/Surgery (DRG 441, weight 0.8785). The codes 115, 302, and 441 indicate the cost ascribed to each service. Calculations for each DRG are modified from time to time; in the standard method, the charge for an individual DRG is calculated by adding up all charges for cases within that particular DRG (Gapenski, 2009).

After arriving at this figure, the amount is divided by the number of classified cases contained in the DRG. Prior to this process, though, patient charges are standardized: the effects of regional wage differences, along with indirect medical education costs if the institution is a teaching hospital, are removed. In this case San Francisco General Hospital is not a teaching facility, but it is located in a large urban geographic area. Additional payments to hospitals that treat a large percentage of low-income patients are also removed (Gapenski, 2008).

Applying the wage criterion to hospital costs accounts for the greatest share of care expenditure, and the Centers for Medicare & Medicaid Services usually adjusts this cost accordingly. Teaching institutions carry a higher cost, which can escalate prices for patients even while bringing more profit to the institution. Three other conditions can affect Mrs. Smith's overall cost: whether San Francisco General Hospital is located more than 35 miles from another hospital; whether it is the only inpatient hospital serving that geographic location; and whether it has been designated a 'critical access hospital' by the Secretary (Blount & Waters, 2001).

For the kidney transplant, the operating payment to be paid to the hospital requires a six-step calculation: Step 1, calculating the standard rate; Step 2, adjusting for the wage index factor; Step 3, adjusting for the DRG weight; Step 4, the disproportionate share payment; Step 5, the indirect medical education payment; and Step 6, outlier payments.

Step 1 Calculating the Standard rate

The large urban area rate is used because San Francisco General Hospital is located in a large urban area.

Labor-related: $22,809.18; non-labor-related: $10,141.85

Step 2 Adjusting for the Wage Index Factor

$22,809.18 x 1.4193 gives an adjusted labor rate of $33,987.07 for San Francisco; $33,987.07 + $21,141.85 = $55,128.92, San Francisco General's adjusted base rate.

Step 3 Adjusting for the DRG Weight

Based on the codes

($33,987.07 + $21,141.85) x (1.8128) = $91,297.71

Step 4 Disproportionate Share Payment

This rate is 0.1413. The hospital's base payment rate is multiplied by (1 + this rate): $91,297.71 x (1 + 0.1413) = $104,198.08

Step 5 Indirect Medical Education Payment

The adjustment factor for indirect medical education is 0.0744. This rate is added to the DSH factor plus 1 to give the hospital an adjustment rate of 1 + 0.1413 + 0.0744 = 1.2157. The payment the hospital can expect to receive for this case is: $91,297.71 x 1.2157 = $110,990.63

Step 6 Outlier Payments

Mrs. Smith's Medicare-approved charges total $150,000. If her cost of care exceeded the payment rate by $14,050, the hospital could apply for outlier payments

(Blount & Waters, 2001).


What is the operating payment to be paid to the hospital?

This is calculated by applying the following formula:

DRG Relative Weight x ((Labor Related Large Urban Standardized Amount x Core-Based Statistical Area [CBSA] wage index) + (Nonlabor Related National Large Urban Standardized Amount x Cost of Living Adjustment)) x (1+ Indirect Medical Education + Disproportionate Share Hospital).
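The operating payment formula above can be translated directly into code. The inputs below simply echo figures that appear in this case for illustration; they are not official CMS rates.

```python
def operating_payment(drg_weight, labor_amount, wage_index,
                      nonlabor_amount, cola, ime, dsh):
    """Direct translation of the operating payment formula above:
    weight x ((labor x wage index) + (nonlabor x COLA)) x (1 + IME + DSH)."""
    adjusted_base = labor_amount * wage_index + nonlabor_amount * cola
    return drg_weight * adjusted_base * (1 + ime + dsh)

# Illustrative call using figures drawn from the case (assumed, not official):
pay = operating_payment(
    drg_weight=4.1370,           # kidney transplant (DRG 302)
    labor_amount=22809.18, wage_index=1.4193,
    nonlabor_amount=10141.85, cola=1.0,
    ime=0.0744, dsh=0.1413,
)
print(round(pay, 2))
```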

What is the capital payment to be paid to the hospital?

This is calculated using the following formula:

(DRG Relative Rate x Federal Capital Rate x Large Urban Add-On x Geographic Cost Adjustment Factor x Cost of Living Adjustment) x (1+ Indirect Medical Education + Disproportionate Share Hospital)
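The capital payment formula above can likewise be sketched in code; the numeric inputs in the example call are hypothetical, since the case does not supply a federal capital rate.

```python
def capital_payment(drg_weight, federal_capital_rate, large_urban_addon,
                    geographic_factor, cola, ime, dsh):
    """Direct translation of the capital payment formula above:
    (weight x rate x add-on x geographic factor x COLA) x (1 + IME + DSH)."""
    rate = (drg_weight * federal_capital_rate * large_urban_addon
            * geographic_factor * cola)
    return rate * (1 + ime + dsh)

# Illustrative call with a hypothetical federal capital rate and factors:
print(round(capital_payment(4.1370, 400.0, 1.03, 1.1, 1.0, 0.0744, 0.1413), 2))
```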

Will the hospital be eligible for the Medicare outlier payment?  

No, because Mrs. Smith's care does not exceed the payment rate by $14,050.
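The eligibility test stated above reduces to a single comparison; the $14,050 fixed-loss figure is the one given in the case, while the payment rate in the example call is hypothetical.

```python
def eligible_for_outlier(cost_of_care: float, payment_rate: float,
                         fixed_loss_threshold: float = 14050.0) -> bool:
    """Outlier eligibility as described above: cost of care must exceed
    the payment rate by more than the fixed-loss threshold."""
    return cost_of_care > payment_rate + fixed_loss_threshold

# With $150,000 in approved charges and a hypothetical payment rate of
# $140,000, the shortfall is under the threshold, so no outlier payment:
print(eligible_for_outlier(150000.0, 140000.0))
```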

What is the total payment to the hospital?



Blount, L. L., & Waters, J. (2001). Managing the Reimbursement Process (3rd ed.). Chicago: AMA Press.

Gapenski, L. (2009). Cases in Healthcare Finance (4th ed.). Boston: McGraw-Hill Irwin.

Gapenski, L. C. (2008). Healthcare finance: An introduction to accounting and financial management (4th ed.). Chicago, IL: Health Administration Press.

Gottlober, P. (2001). Medicare Hospital Prospective Payment System: How DRG Rates Are Calculated and Updated. Office of Inspector General, Office of Evaluation and Inspections, Region IX.


Impact of Earth’s Atmosphere on Optical Telescope Observations

The importance of visibility when making observations with an optical telescope may be an obvious consideration, but it can also be easy to discount the challenges faced in acquiring a clear image. One of the most potent barriers to optical telescope viewing is the atmosphere of the Earth (Chaisson & McMillan, 2011). Any astronomical observation is subject to the effects of the atmosphere, as radiation must pass through it before it can be registered by an Earth-bound telescope. The air acts like a lens, bending light before it reaches an optical telescope and thereby distorting the image of any source outside the atmosphere. Common atmospheric effects include blurring and variations in brightness known as twinkling. This problem has been a target of research for much of the existence of optical telescopes, though it took the relatively recent development of launching telescopes into space to escape the effect entirely.

            The deformation of measurable object diameters is the most devastating effect of the atmosphere on observations made through an optical telescope. Diameter has historically been a key variable in the identification of celestial objects and the atmosphere causes refractive distortions that threaten the ability to reliably record such values. The primary cause of atmospheric refraction is the presence of turbulence and the resultant mixing of air components. Turbulence causes sections of air to flow in a way that impacts adjacent streams, resulting in apparent ripples throughout the atmosphere. These alterations can result in the blurs and twinkles that are common problems when making observations with an optical telescope.

The degree of distortion caused by atmospheric turbulence varies with location and time. Accordingly, optical telescopes have consistently been placed in areas thought to be least impacted by the atmosphere, such as low-humidity sites like deserts and mountain peaks. The latter has a second helpful quality: high altitude reduces the amount of atmosphere that light must pass through before reaching the telescope. Additional steps have been taken to minimize the issue in the form of complicated techniques and tools known as adaptive optics. However, not even these innovations are immune to the effect on a full-time basis. The development of the Hubble telescope circumvented the confounding influence of the atmosphere completely, though deploying instruments outside of the atmosphere is not a simple task and thus will not be a widely available solution for some time.

            The atmosphere has had such a dramatic impact on astronomical optical imagery that humans have been forced to leave the planet in order to achieve a desirable level of representational accuracy. Though space-based telescopes like the Hubble offer the best images, several other techniques allow for much improved reliability in optical telescope viewing; however, these tools are similarly expensive and difficult to implement. It is possible that the most efficient approaches to the atmospheric problem will come from interferometry and its associated concepts. Turbulence in the air can be treated as a form of interference, since it is essentially a pressure disturbance interacting with electromagnetic waves. Instruments called interferometers have been developed from this perspective; they address the issue by observing the waves from multiple reference points, which can then be used to deduce and reduce the impact of interference on the observed waveforms.


Chaisson, E., & McMillan, S. (2011). Astronomy: A beginner's guide to the universe (6th ed.). Benjamin-Cummings Pub Co.


Interferometry and Problems in Radio Astronomy

The study of the universe would be extremely limited if the only available frame of reference were visible wavelengths. The waves that we interpret as visible colors comprise only a small portion of the entire electromagnetic spectrum. However, waves of all electromagnetic frequencies can potentially carry valuable information about the nature of the universe and observable reality. For example, radio frequencies have become useful in a number of applications, including serving as the foundation for an eponymous form of astronomy. Radio waves oscillate at frequencies from 3 kHz to 300 GHz and are plentiful throughout the observable universe. An assortment of celestial objects has been identified as sources of radio waves (Chaisson & McMillan, 2011), including stars, galaxies, quasars, and pulsars. The most famous source of radio emissions is the Big Bang, and findings from radio astronomy helped to uncover the cosmic microwave background (CMB), which is theorized to have originated from that momentous event.
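The frequency band quoted above translates directly into wavelengths via the relation wavelength = c / frequency; a short sketch (generic Python, not a calculation from the source) shows that the radio band spans roughly 100 km down to about 1 mm:

```python
# Wavelength follows from lambda = c / f.
C = 299_792_458.0  # speed of light in m/s

def wavelength_m(freq_hz):
    """Wavelength in meters for a given frequency in hertz."""
    return C / freq_hz

longest = wavelength_m(3e3)      # lower band edge, 3 kHz: roughly 100 km
shortest = wavelength_m(300e9)   # upper band edge, 300 GHz: roughly 1 mm
```

This spread of nearly eight orders of magnitude is part of why so many different celestial objects emit detectably in the radio band.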

            Radio astronomy uses a variety of advanced tools and techniques to make telescopic observations of radio waves and their sources. However, despite continual advances in the field, much of the information gathered by these procedures would be lost if not for the application of radio interferometry. Interferometry is a group of techniques that aims to reduce, and ideally eliminate, the effect of interference in readings of electromagnetic radiation. Interference occurs when waves interact to produce summations, cancellations, and other altered states that obscure the characteristics of the original waveforms. This is a serious threat to the validity of electromagnetic data on every scale, especially data taken from distant sources, as those waves have had more time and space in which to be altered by interference. Fortunately, the study of waves via interferometry can account for even complex interactions in most cases.
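The summations and cancellations described above can be illustrated with a toy numerical sketch (a generic superposition demo, not a method from the source): two identical waves in phase reinforce each other, while a half-cycle phase offset cancels them completely.

```python
import math

def wave(amplitude, phase, t, freq=1.0):
    """One sinusoidal component sampled at time t (seconds)."""
    return amplitude * math.sin(2 * math.pi * freq * t + phase)

def superpose(components, t):
    """Superposition: the combined signal is simply the sum of its parts."""
    return sum(wave(a, p, t) for a, p in components)

t = 0.25  # sample where sin(2*pi*t) peaks for freq = 1 Hz
constructive = superpose([(1.0, 0.0), (1.0, 0.0)], t)     # amplitudes add
destructive = superpose([(1.0, 0.0), (1.0, math.pi)], t)  # half-cycle offset cancels
```

Real interference involves many components with continuously varying phases, but the principle is the same: the observed waveform is the sum of everything that arrives.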

            The most important concept supporting the application of radio or any other form of interferometry is superposition, which refers to the combination of waveforms. Because observed waves are the sum of their component interactions, superposition provides the framework for identifying and removing the changes that produced the waves as recorded. Radio interferometry was developed in direct response to the difficulties arising from excessive interference in radio observations, which were otherwise thought to be treatable only by increasing the size of radio telescopes to unrealistic levels. Radio telescopes were already difficult to use due to atmospheric resistance and the need to operate at high elevations. Tools known as radio interferometers address the problem by providing a number of reference points in the form of multiple linked radio telescopes. This arrangement also provides increased signal volume and resolution, though it requires large distances between telescopes and thus may face an expansion barrier over time.
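The resolution benefit of linking widely spaced telescopes can be seen with the standard diffraction-limit estimate, theta ≈ 1.22 · lambda / D, where for an interferometer the baseline between dishes plays the role of the aperture D. The sketch below uses illustrative numbers (a 100 m dish and a 36 km baseline are assumptions for the example, not figures from the text):

```python
import math

def resolution_arcsec(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution, theta ~ 1.22 * lambda / D,
    converted from radians to arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

# Observing the 21 cm hydrogen line:
single_dish = resolution_arcsec(0.21, 100.0)      # one 100 m dish
linked_array = resolution_arcsec(0.21, 36_000.0)  # dishes 36 km apart
```

Because resolution scales inversely with aperture, the 36 km baseline resolves detail 360 times finer than the lone dish, which is exactly the "unrealistically large telescope" that interferometry substitutes for.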

            Radio astronomy has become a key part of many astronomical endeavors by providing information from a band of wavelengths that would otherwise remain unavailable for human observation. Like all forms of wave observation, radio astronomy is prone to the effects of interference. The study of interferometry provides tools and methods to address this issue by examining the effects of superposition on the observed waves. These influences can then be removed from the equation to reveal a closer approximation of the original waveform. Giant multi-telescope arrangements known as interferometers collect the required information, while advanced mathematical programs determine and reduce the influence of interference on astronomical observations in the radio bandwidth.


Chaisson, E., & McMillan, S. (2011). Astronomy: A beginner's guide to the universe (6th ed.). Benjamin-Cummings Pub Co.


Lunar and Solar Eclipses

Most people on the planet are aware that both the sun and the moon can appear to be blocked out at certain times. In both cases the event is known as an eclipse and is a type of syzygy, a term describing the nearly straight-line arrangement of three bodies in a single gravitational system. Eclipses can occur in many kinds of gravitational configurations and are not limited to our observations from Earth, though the terms lunar and solar eclipse are almost universally understood to refer to our own moon and sun respectively. Perhaps surprisingly, the similarities between solar and lunar eclipses are virtually non-existent beyond these fundamental characteristics. There are many important differences between these observable occlusions of the sun and moon that demonstrate the characteristics unique to each event (Chaisson & McMillan, 2011).

            A lunar eclipse occurs when the moon passes through the shadow of the Earth. In this arrangement the moon lies on the far side of the Earth relative to the sun, and the Earth casts a shadow that blocks the sunlight that would otherwise illuminate the moon for regular viewing. Only a relatively small and especially dark central portion of the shadow, the umbra, can cause a total eclipse; the larger outer segment of the shadow forms the penumbra, which is not completely free of solar radiation. A partial lunar eclipse occurs when only part of the moon is in the umbra, and a penumbral eclipse refers to the darkening of the body while in the penumbra. A total penumbral eclipse is possible when the moon sits entirely within the outer shadow, though the side closest to the umbra can still appear darker than the rest. Lunar eclipses are the most commonly observed by humans. This prevalence rests on several factors, including viewing access: a lunar eclipse can be seen from the entire nighttime side of the planet. Also, totality can last for nearly two hours depending on the positioning of the bodies, with partial coverage present for up to four.

            Solar eclipses are a much rarer observation than their lunar counterparts. A major reason for this scarcity is that the occlusion occurs when the relatively tiny moon passes between the sun and the Earth. The moon casts a shadow in this situation, and a total solar eclipse occurs when the umbra reaches the surface of the Earth. The total eclipse is then seen only by those within the area of the Earth covered by the umbra, which lingers for only a short time in any given spot. Should the umbra fail to reach the planet, viewers directly in line with the moon experience an annular eclipse, in which the sun appears as a bright ring around the smaller circular darkness of the moon. An especially rare solar eclipse is the hybrid type, in which the eclipse may appear annular from one vantage point but total from others. A partial solar eclipse is the most commonly viewed form because it occurs in the penumbra during total eclipse events and can be the only observable result of a total eclipse when the umbra passes beyond the Earth's poles. One of the most widely known differences between solar and lunar eclipses is that a person can safely look directly at an occluded moon, while doing the same during a solar eclipse could cause serious eye damage.

            It can be tempting to assume that eclipses should occur on a monthly basis given the orbit of the moon around the Earth. In reality these events are far rarer, because the moon's orbit around the Earth is inclined by about five degrees to the plane of the Earth's orbit around the sun. Accordingly, the three bodies do not form a straight line on every pass, as the moon may lie above or below the plane of reference, and an eclipse of either type will not occur every month.


Chaisson, E., & McMillan, S. (2011). Astronomy: A beginner's guide to the universe (6th ed.). Benjamin-Cummings Pub Co.


Discoveries of Galileo and Support for Copernicus

In the first half of the sixteenth century Nicolaus Copernicus developed the first thorough mathematical model of our solar system as a heliocentric arrangement. However, he was well aware of the resistance he would encounter from Church figures should his theory be published, because it was in direct conflict with the traditional geocentric beliefs supported by scripture. Additionally, the science behind his theory was revolutionary for the period, and Copernicus recognized that he would also be bombarded with criticism from a variety of scholarly perspectives, including philosophical ones. Despite his attempts to downplay the theory, it quickly spread by word of mouth through academic and related channels across Europe. Eventually Copernicus consented to the publication of his heliocentric model in his famous book De revolutionibus orbium coelestium. He never had to face the questioning he feared, as he died before it was released.

            Galileo Galilei became a champion of the Copernican heliocentric model in the early seventeenth century, coming to the theory's defense as it faced the opposition its author had predicted. Galileo found support for the theory through observations with a telescope (Chaisson & McMillan, 2011), one of the most advanced scientific instruments of the time and a luxury his predecessor never had. His first discovery favoring the heliocentric model over the geocentric came shortly after the dawn of the seventeenth century: he observed that a star later identified as Kepler's supernova changed in a manner incompatible with the immutable universe held by the Aristotelian geocentric perspective. A few years later Galileo identified objects moving in line around Jupiter. He saw them disappear and reappear from behind the gas giant, laying the groundwork for the planet/moon model, a major astronomical discovery that strongly supported the Copernican heliocentric system over geocentric theories by demonstrating that these objects orbited something other than the Earth.

            Further viewings of planets by Galileo gave even more evidence of deficiencies in non-heliocentric theories, though the astronomer may not have recognized much of it at the time. Saturn's rings became a confusing mystery, as from certain angles they appeared to be orbiting moons, some of which seemed to disappear at random intervals. Despite the lack of clarity about the nature of these objects, Saturn's rings would come to represent another example of non-geocentric orbiting. Venus may have been the most significant of his planetary observations in supporting the Copernican heliocentric perspective. The existence of Venus' observed phases was in direct conflict with predictions from all forms of geocentric theory, as well as from other planetary models under debate at the time. Accordingly, several transitional models emerged that combined heliocentric and geocentric concepts to account for Galileo's findings, each of which would eventually give way to purely heliocentric designs.

            The extent of Galileo's various contributions to astronomy and physics through telescopic observation cannot be overstated. One of the most prominent of his findings is that the Copernican heliocentric system is vastly superior to geocentric models in accounting for repeatable and testable observations of heavenly bodies. Even some of his more local investigations uncovered aspects of the moon's phases and of sunspots that further reduced the feasibility of any theory other than the heliocentric, while extremely distant stars and the dense star clouds of the Milky Way presented a seemingly unending testing ground for future research that would also support the theory.


Chaisson, E., & McMillan, S. (2011). Astronomy: A beginner's guide to the universe (6th ed.). Benjamin-Cummings Pub Co.


Environmental Issues

As the use of technology becomes more prevalent, our need to monitor human impact on the environment also increases. Although technological advances make our lives easier, it is essential to understand how the production of these items negatively impacts our planet. Factories that make these products generate a large quantity of pollution, ranging from carbon dioxide emissions that drive climate change to paper waste that depletes Earth's forests. Therefore, it is essential to educate others on the harmful effects of factory pollution and find ways to reverse the impact of this waste on our environment. To do so, we must begin by educating our youth about the consequences of pollution and determine effective ways to reduce the amount of carbon dioxide in the air.

            Many people today do not believe that global warming exists or that it is a threat to our environment. Despite this, there is clear evidence that the Earth's temperatures are changing, sea levels are rising, and the ozone layer is thinning. In order to dispel the myths surrounding global warming, we must educate our nation's youth while they are still in school. I therefore propose the addition of an environmental science course at the elementary level. In addition to learning basic math, language, and history, students would be required to learn the basic science skills that will enable them to interpret scientific evidence on their own and draw their own sound conclusions. Since youth are the future of this country, educating them on these issues early can have a positive effect on environmental policy in the future.

            Next, we must find a way to reduce carbon dioxide emissions into the atmosphere. These emissions trap heat and accelerate global warming. To address this, we must petition and lobby against factories that produce significant amounts of pollution. Many will not want to comply, but it is important to explain to them that there are clean energy alternatives that will allow them to continue their business as before while leaving a smaller carbon footprint. Lobbying against these factories may help enact a national policy requiring them to find ways to produce their technology and products with minimal environmental impact.

In conclusion, although factories that produce technology are harmful to the environment, there are ways to persuade these companies to switch to clean energy that will help protect the Earth. These strategies include educating youth, lobbying against factories that cause pollution, and trying to persuade factories to use clean energy. While this may not be enough to completely undo the effects of global warming, it’s certainly a start. It is essential for everyone to understand the importance of the environment and how our daily activities can contribute to its demise; only then can we start to make a true difference.


Cultural Impact


Crack cocaine is a highly addictive substance with a unique history. Cocaine was first extracted from the coca plant in 1862 and used in various medicines, some of which could be purchased over the counter; it was even one of the ingredients in the original Coca-Cola drink (Baumer, p. 312). Powder cocaine was first used recreationally by affluent members of society, as pure cocaine was very expensive. Crack cocaine became a cheaper substitute: it is produced by adding water and baking soda to pure cocaine, and the mixture is then baked and "cracked" into small pieces. The product delivers an intense high that lasts only about fifteen minutes. Crack cocaine became popular in the 1980s and has had lasting negative effects on the black community; black males are more likely to use crack cocaine than members of any other race. Crack cocaine has harmed the African-American community in several ways: its use increases sexual risk-taking behaviors and violence among users, users are at higher risk of mental health issues, and chronic users develop health problems over time that can lead to heart attacks, strokes, and other gastrointestinal complications. Nonetheless, there are treatment plans geared toward helping African-American individuals overcome crack addiction. These programs range from out-patient services to extended in-patient stays and are operated by organizations ranging from hospitals to religious groups.

Risky Behavior

Many crack addicts take part in risky sexual behaviors in order to fund their habits. When one thinks of prostitution, or the exchange of sex for something else, one often thinks only of women. However, in the drug world men exchange sex for drugs just as often as women do. For example, one study conducted in an urban area found that both men and women engaged in trading oral sex for drugs or money; further, male respondents who acknowledged trading sex for drugs or money were more likely than women respondents to acknowledge having engaged in anal sex in trading for drugs (Maranda, Han, & Rainone, 2004, p. 318). Also, more women than men reported using condoms, but the women also confirmed that if a customer insisted on not using a condom they would oblige. The study found that women often traded sex to gain access to more crack or to mentally escape the horrors of prostitution, while men reported heightened sexual urges when high on crack cocaine. Maranda, Han, and Rainone (2004) reported, "Some women reported that they traded sex to support their drug addiction, others seemed to use drugs to cope with trading sex." Consequently, the AIDS epidemic has grown alongside the crack epidemic. The authors add that the best way to prevent the spread of HIV is to prevent the behaviors that put people at risk, since drug use often drives people to risky behavior in their efforts to obtain the drug (Maranda, Han, & Rainone, 2004, p. 320).

Mental Illness

Crack cocaine use has also been linked to the onset of mental health issues. Chronic crack use has been reported to produce side effects such as anxiety, paranoia, egocentric behavior, dysphoria, anorexia, and delusions. According to Baumer,

“Different routes of using cocaine are associated with different negative consequences. Crack users have a greater number of symptoms, and higher levels of anxiety, depression, paranoid ideation, and psychoticism. Psychiatric comorbidity among cocaine dependent users is not only increased for other substance disorders, but also for personality disorders.” (Baumer, p. 319).

Years of chronic use have been linked to more serious mental illnesses such as schizophrenia. Scientists believe that because crack alters brain activity, the resulting imbalance can lead to the disease. Crack use blocks certain neurotransmitters, the substances that allow brain cells to communicate with each other. The brains of people with schizophrenia have less gray matter, and some areas of the brain display more or less activity, much like the brains of crack users. Baumer adds that using drugs also increases the probability that a person will become violent; when persons with mental illness or drug dependency do become violent, it is usually directed toward a family member (Baumer, p. 321).

Effects on Black Community

            Although crack cocaine is used by people of various races, it has hit the black community the hardest. Crack use has been linked to increases in murder and incarceration, and high school drop-out rates have also risen since the drug's introduction. It is estimated that crack markets account for between 40 and 73 percent of the drop in black males' high school graduation rate. In essence, the introduction of crack cocaine to the black community did three things: it increased the probability of a black person being murdered, increased the risk of incarceration, and made selling crack look like a potential source of income in the black home. All of these scenarios limit the benefits of a proper education. Consequently, high school can seem less attractive to black youth who expect either to end up in jail or to earn fast money selling the drug.

Incidence and Prevalence

            According to data from the National Survey on Drug Use and Health (NSDUH), young black adults aged 18 to 25 represent the highest rates of lifetime (60.5%), past-year (34.6%), and past-month (20.3%) use of crack cocaine. In 2007, 17 percent of state inmates and 18 percent of federal inmates admitted that they committed their crimes while high or in an effort to get money for drugs. Also, 60 percent of prisoners polled reported prior drug use, while 79 percent reportedly were still using drugs. Sadly, nearly 75 percent of addicts who enter a recovery program will relapse. In 2008, 18.8 percent of blacks reported using crack cocaine, while the national average of crack use is 19.9 percent. Astoundingly, blacks make up only about 11.3 percent of the U.S. population.

Treatment Options

            Implementing an intervention program can be difficult; it depends on the individual addict. If the addict has family, the family needs to be equipped with the tools to help; if the addict is alone, he needs an appointed support system to help him maintain his sobriety. Financial difficulty is the main obstacle addicts face, as most do not have insurance or the funds to pay for a treatment program. Non-profit organizations may sometimes be willing to help financially. There are free programs that addicts may enter, but these are often overcrowded and can have waiting periods as long as a year. Some addicts choose to go through a detox program and then continue with an out-patient facility, where the addict can go home daily but is required to attend certain meetings. With in-patient facilities, the addict must remain in the facility for the duration of the program; many addicts are more successful in this setting. Sadly, many relapse after they complete the program and return to their old environments, so many therapists suggest that the addict move to another location for a better chance at remaining sober. Ultimately, the program has to be designed around the needs of the addict and his or her family.

Specific Treatment for African-Americans

A range of treatment programs has been developed in the last few decades to address substance abuse among members of the African-American community. A 2006 report (Liddle et al., 2006) examined an intervention and treatment program, developed within the larger context of family therapy, that was designed to address the specific needs and concerns of adolescent African-American males with regard to substance abuse. The program aims to be "culturally specific" and is based on several core components intended to provide a therapeutic framework that both considers and derives utility from cultural references and touchstones relevant to the African-American community in general and to adolescent African-American males in particular.

            The therapeutic framework is predicated on the notion that African-American adolescent males live in an “intersection” of cross-cultural frameworks (Liddle et al, 2006). These include the overarching mainstream American culture, the American minority culture, and the African-American-specific culture. As such the therapeutic framework utilizes culturally-relevant components, such as music and movies, which address issues related to the specific issue of substance abuse as well as larger issues about family, inner-city life, and other cultural components that may be relevant to African-American male adolescents. This culturally-specific therapeutic framework is intended to promote a strong level of engagement and participation among subjects, as opposed to a top-down model of information dissemination.

            The culturally-specific therapy is considered an adjunct of the larger model of Multi-Dimensional Family Therapy (MDFT), and it attempts to address the "oppositional culture" and the "code of the street" in which young African-Americans are often raised. By developing a treatment program that embraces these cultural components, rather than attempting to subvert or simply ignore them, the approach seeks to draw out positive cultural references that support the avoidance of substance abuse and to emphasize and reinforce such references as a means of promoting abstinence from drug and alcohol use. In short, the MDFT approach attempts to make it "cool" not to use drugs and alcohol by promoting this view through culturally-specific references that are likely to be acknowledged and accepted by subjects. The report asserts that this culturally-specific MDFT approach shows strong promise as an effective means of helping young African-American males develop an individual identity that aligns with their larger cultural frameworks and supports the choice not to use drugs and alcohol.


            Crack addiction is as much an emotional and psychological addiction as a physical one. Many crack addicts are afraid of being without the drug. Crack addicts are often dual drug users, which means their crack addiction is often brought on by the use of some other illicit drug. As a result, healthcare providers must address the problem on two fronts: they must first address the psychological issues the addict may be dealing with, and then give the addict tools to handle stress and other life issues without turning to crack as a coping mechanism. Many African Americans fear being labeled as mentally ill or as a "crack head," so healthcare providers must take care to treat them with respect in order to gain their much-needed trust. Health care providers and mental health workers must collaborate, bringing expertise from both backgrounds to the forefront to help crack addicts gain and maintain sobriety. Most importantly, health care providers must convey their understanding that crack addicts are essentially like anyone else: they have made bad decisions, but they have the potential to become productive members of society once again.


Baumer, E. P. (1994). Poverty, crack, and crime: A cross-city analysis. Journal of Research in Crime and Delinquency, 31, 311-327.

Liddle, H. A., Jackson-Gilfort, A., & Marvel, F. A. (2006). An empirically-supported and culturally-specific engagement and intervention strategy for African-American adolescent males. American Journal of Orthopsychiatry, 75(2), 215-225.

Maranda, M.J., Han, C., & Rainone, G.A. (2004). Crack cocaine and sex. Journal of Psychoactive Drugs, 36, 315-322.

Substance Abuse and Mental Health Services Administration (2012). Results from the 2011 National Survey on Drug Use and Health: Summary of National Findings. Retrieved from


The King Tut Curse

The stories surrounding the curse on King Tutankhamen's tomb began during the spring of 1923 when Lord Carnarvon, who funded the tomb's excavation, died shortly after its discovery. Lord Carnarvon's misfortunes began when a mosquito bit him on the cheek and he accidentally aggravated the spot while shaving; shortly after this mishap the wound became infected and he fell sick. His symptoms grew rapidly worse, and although a doctor was sent to treat his fever, Lord Carnarvon died before help arrived. According to observers, all of the lights in Cairo went out at the exact moment he died (KingTutOne, n.d.).

            It is important to understand that even before this situation occurred, a multitude of stories had been handed down through history about mummies having magic powers. As a consequence of these pre-existing legends, the researchers who approached the tomb had likely been gossiping amongst themselves about their fears and wonder at uncovering the burial site. When Lord Carnarvon fell ill, it made sense to explain his death in terms of the curse. While it is nearly impossible to tell what he really died from, we do know that his death began a series of rumors that grew the legacy of the curse of the mummy.

According to legend, anyone who dared to disturb a mummy's grave would suffer a series of mishaps that are the mummy's way of protecting its resting place. Before the discovery of King Tut's grave, there were no known Egyptian tombs that had remained untouched by grave robbers. This dig therefore carried a certain stigma, because the team would be the first to disturb the grave of a mummy that had been resting peacefully. After Lord Carnarvon mysteriously died, the media was drawn to the case, which contributed to the perpetuation of the myth: it was the press that claimed King Tut had enacted vengeance upon Lord Carnarvon and that whoever entered the tomb would suffer the consequences. Carnarvon's death put people in an uproar, and other stories began to surface as well. Two remain prominent: that a cobra killed Howard Carter, the explorer who discovered King Tut's burial place, and that Carnarvon's dog suddenly dropped dead at the same moment his owner did.

Despite the myth that Howard Carter was killed by a snake, it is documented that he lived for ten years after the discovery of King Tut's tomb. During this time he researched the artifacts found in the tomb, compiled an extensive record of the history of the items and their relationship to the king, and successfully established King Tut's importance in Egyptian history. Skeptics note that Howard Carter was the first person to enter the tomb and spent a great deal of time there after this initial entry; if the curse of the mummy were real, why didn't Carter die around the same time as Lord Carnarvon?

Many scientists and researchers are currently interested in determining why Lord Carnarvon actually died and what could be contained within King Tut’s tomb that caused his death. The most realistic explanation for the myth is that the story is so old, and has been told by so many people, that we no longer know the actual facts of what happened when Lord Carnarvon died. In addition, there is no documentation on whether his dog also died, whether the lights in Cairo actually went out at the exact moment of Carnarvon’s death, or whether a snake killed Carter or he died for some other reason. Overall, belief in this myth depends upon whether a person finds the evidence for the proposed scenario compelling.

            According to a May 2005 National Geographic article entitled “Egypt’s ‘King Tut Curse’ Caused by Tomb Toxins?”, scientists believe that there may have been substances contained within the tomb that caused the supposed “curse deaths” (Handwerk, 2005). The article states that tombs are teeming with meats, vegetables, and fruits that can support the growth of insects, molds, and bacteria. Physicians have reported that two types of mold, Aspergillus niger and Aspergillus flavus, are typically carried by ancient mummies; both can cause allergic reactions ranging from basic allergy symptoms to lung bleeding. The article goes on to discuss several other pathogens and chemicals that could have caused Lord Carnarvon’s death. Other researchers believe that Lord Carnarvon was already ill before the expedition. These experts note that he did not die until after a few months of exposure to the tomb; if the wrath of the mummy were real, one would expect death to follow instantly.

            Although many people still believe in the curse of the mummy, it is interesting to note that the media continues to perpetuate this mystery. Even with modern science and technology, we are still not able to completely explain the mysteries surrounding King Tut’s tomb. It is likely that we will never have an explanation for what really happened in 1923, and that we will continue to tell stories about this incident in the future. It is true that unsolvable mysteries make the most interesting tales.


Handwerk, B. (2005). Egypt’s “King Tut Curse” Caused by Tomb Toxins? National Geographic. Retrieved from

KingTutOne. (n.d.). King Tut’s Curse. Retrieved from

Computer Science


Although hypervisors are useful because they allow different operating systems to share a single hardware host, there are several technical advantages and disadvantages to using a hypervisor in an enterprise. Hypervisors are what make the Cloud possible, and their advantages and disadvantages depend upon the type being used (Bredehoeft, 2012).

            Type 1 hypervisors are installed directly onto “bare-metal” hardware and do not require an additional operating system. There are several type 1 hypervisors, and they include names familiar even to people who don’t regularly deal with computers: VMware ESX and ESXi, Citrix XenServer, Linux KVM, Microsoft Hyper-V, MokaFive, and XenClient. The advantages of using a type 1 hypervisor include direct access to the hardware, since it installs on bare metal; a thin, optimized system with a minimal footprint, leaving more resources for the machines it hosts; increased security, because the system is more difficult to compromise; usefulness for testing and lab settings; support for more than one virtual machine on the same hardware; and the fact that hardware failures will not affect the operating system.

            There are also several disadvantages associated with using a type 1 hypervisor: very large virtual machines (e.g., 1000+ GB) are not supported, particular hardware components are required, there is a cost associated with the license or support, the console interface is poor, and not every operating system can be run.

            Type 2 hypervisors are more of an application installed on top of an operating system rather than directly on bare metal. Examples include Parallels, VMware Fusion and Player, VMware Workstation, VMware Server, Microsoft Virtual PC, VirtualBox, Linux KVM, and Citrix Receiver. The advantages of using a type 2 hypervisor include the ability to run on a greater variety of hardware, because the host operating system controls hardware access; an easy-to-use interface; the ability to run inside a Windows operating system; suitability for lab testing and development; the ability to use several operating systems within a single operating system; an easier management paradigm for desktops, which is useful for enterprises; the fact that a company does not have to provide dedicated hardware to every user; and the ability to secure data on the desktop.

            The disadvantages of type 2 hypervisors include decreased security, because of interaction with the VM container and the ability to copy it for additional use; a large overall footprint; the need to install the hypervisor in the host operating system, which is straightforward but can sometimes become complex; the loss of centralized management; and, lastly, the fact that type 2 cannot support as many VMs as type 1 can.

            Microsoft’s Hyper-V, VMware’s ESXi, and Hitachi’s Virtage are commonly used hypervisors in the enterprise. Hyper-V was formerly known as Windows Server Virtualization and allows platform virtualization on x86-64 systems. The architecture of Hyper-V isolates virtual machines using partitions. At least one parent partition needs to be running Windows Server in order for the hypervisor to access the hardware. Supported guest operating systems include Windows 7, Windows Server, Windows Vista, and Windows XP. This hypervisor has several disadvantages: it does not support virtualized USB or COM ports; audio hardware is not virtualized; optical drives virtualized in the guest VM are read-only, so it is impossible to burn media to CDs and DVDs; there are reported issues with graphics performance on the host because the translation lookaside buffer is flushed frequently; Windows Server 2008 does not support the maintenance of network connections and uninterrupted service during VM migration; performance is degraded for Windows XP users; link aggregation is only supported by drivers that support NIC teaming; and there is no support for home editions of Windows (Conger, 2012).

            VMware ESXi is a type 1 (bare-metal embedded) hypervisor used for guest virtual servers that run directly on host server hardware. This hypervisor is unique because it is placed on a compact storage device, which distinguishes it from VMware ESX. As previously stated, the VMware ESXi hypervisor’s architecture is built to run on bare metal. In addition, it uses its own kernel rather than a third-party operating system. The VMware kernel connects to the outside world through hardware, guest systems, and a service console. For VMware ESXi to be able to virtualize Windows 8 or Windows Server 2012, the hypervisor must be version 5.x or greater (VMware, 2004). There are several limitations of this system, involving infrastructure, performance, and networking. The infrastructure limits include a guest-system RAM maximum of 255 GB, a host RAM maximum of 1 TB, 32 hosts in a high-availability cluster, 5 primary nodes in ESX cluster high availability, 32 hosts in a distributed resource scheduler cluster, a maximum of 8 processors per virtual machine, a maximum of 160 processors per host, 12 cores per processor, and 320 virtual machines per host; in addition, ESXi prior to version 5 will not support the latest Microsoft operating systems, Windows 8 and Windows Server 2012. The network limitations primarily involve the use of the Cisco Nexus 1000V distributed virtual switch: 64 ESX/ESXi hosts per VSM (Virtual Supervisor Module), 2048 virtual Ethernet interfaces per VMware vDS (virtual distributed switch), a maximum of 216 virtual interfaces per ESX/ESXi host, 2048 active VLANs (one to be used for communication between VEMs and the VSM), 2048 port profiles, 32 physical NICs per ESX/ESXi (physical) host, 256 port channels per VMware vDS, and a maximum of 8 port channels per ESX/ESXi host (CISCO, n.d.). The performance limitations include an increase in the amount of work the CPU must perform in order to virtualize the hardware. According to VMware, the Virtual Machine Interface was developed to correct this issue using paravirtualization, although only a few operating systems support this program.
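Published maximums like these are typically used as a checklist when sizing a deployment. The sketch below checks a proposed plan against a few of the per-VM and per-host ceilings quoted above; the dictionary keys and the `check_plan` helper are illustrative names, not part of any VMware tool.

```python
# Hedged sketch: validate a proposed deployment against the ESXi maximums
# cited in the text. Limit values come from the figures quoted above;
# field names are made up for illustration.

ESXI_LIMITS = {
    "vcpus_per_vm": 8,            # max processors per virtual machine
    "vms_per_host": 320,          # max virtual machines per host
    "processors_per_host": 160,   # max processors per host
    "hosts_per_ha_cluster": 32,   # max hosts in a high-availability cluster
    "guest_ram_gb": 255,          # max RAM per guest system, in GB
}

def check_plan(plan):
    """Return a list of limit violations for a proposed deployment plan."""
    violations = []
    for key, limit in ESXI_LIMITS.items():
        requested = plan.get(key)
        if requested is not None and requested > limit:
            violations.append(f"{key}: requested {requested}, limit {limit}")
    return violations

# A VM asking for 16 vCPUs exceeds the 8-vCPU-per-VM ceiling;
# 128 GB of guest RAM is within bounds.
print(check_plan({"vcpus_per_vm": 16, "guest_ram_gb": 128}))
```

A check like this is useful because exceeding a documented maximum usually fails only at deployment time, long after the hardware has been purchased.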

            According to storageservers, Hitachi offers the “world’s first server virtualization software capable of running on multiple instances” (storageservers, 2012). It provides functions that allow logical partitioning and is useful for multi-tenant-style cloud computing. To initiate virtualization, the system integrates Kernel-based Virtual Machine (KVM) technology, which runs on top of Hitachi Virtage and serves as a base for the Hitachi BladeSymphony server line. Since this product is relatively new, there are not many known disadvantages of the system, so users will be selecting this hypervisor at their own risk. While the older hypervisors have many known limitations, there are also known ways to work around them; when considering Hitachi Virtage, buyers should be aware that this is not yet the case. Despite this, the article argues that this hypervisor is useful for reducing the total cost of ownership because it uses server virtualization technology. The article also states that this hypervisor will have a high level of hardware stability and is compatible with high-level Linux systems. It is expected that as the system is used, it will continue to be upgraded and made more user-friendly to meet the needs of the consumer.

            Certain uses of hypervisors can lead to a decrease in total cost of ownership for an enterprise. According to the TechTarget article “Time to consider multiple hypervisors?”, many companies are finding that a single hypervisor isn’t enough for their data centers. At the same time, adding an additional hypervisor introduces several risks, so it is important to weigh the advantages and disadvantages of using one hypervisor versus many (Bigelow, n.d.). One of the major arguments the article makes for a second hypervisor involves a decrease in the total cost of ownership (TCO) of an enterprise. In 2010, TechTarget surveyed a series of information technology professionals about their hypervisor choices. They found that cost efficiency was a significant issue for a majority of experts and drove their decisions; this was mainly a factor when participants wanted to consider an alternative to VMware virtualization. The reasoning involved their need for more features and functionality, a desire for improved interoperability, and a desire to avoid vendor lock-in; although VMware is a good hypervisor, it is expensive, and lock-in would lead to consistently purchasing only VMware products, which is a disadvantage when budgets are tight.

            Information technology professionals argue that the cost involved in hypervisor selection is driven less by the actual hypervisor chosen than by the virtualization management strategy. While companies that run uniform x86-based servers will be able to get by using a single hypervisor for a good price, they must keep in mind that it will not run as well, or offer all of its features, on mainframe, RISC, or SPARC-based servers; a better hypervisor is therefore needed for those systems. To address this, it is useful to use a single hypervisor for server virtualization and a separate one for desktop virtualization. The article also notes that organizations may want yet another hypervisor to support their private clouds. In addition, utilizing several hypervisors can be cost-effective depending on the company’s technological evolution and its acquisition of these systems. For example, a company may start out with a basic hypervisor to suit its needs. As the company evolves and needs a hypervisor that supports different capabilities, it can move the old hypervisor to a new function instead of replacing it altogether. The company will then be running two hypervisors and be able to virtualize more efficiently without wasting any money.

            The direct cost of hypervisors is an additional issue in an enterprise. Most hypervisors are free to try, and some even come with the Windows Server operating system. However, it is important to understand that acquiring a hypervisor costs more than just obtaining the software license. The cost comes into play mainly with the features that accompany the hypervisor and the management tools needed to keep track of the virtual data center. In addition, running several hypervisors increases cost because more IT resources are needed to support and maintain these platforms. As a consequence, enterprises that consider a second hypervisor need to think carefully about its features and capabilities. If the second hypervisor leads to more efficient computing or better performance, it will save the company money overall, because basic job functions can be performed more accurately.

            Unfortunately, neither a single hypervisor nor multiple ones is a one-size-fits-all cost saver. Companies that use more data and have many systems will likely benefit from multiple hypervisors, while smaller companies that don’t require a lot of data will benefit from a single one. When considering the total cost of ownership for hypervisors, it is essential that an enterprise take all of its computing needs into consideration before making a decision. Additional costs that companies must consider before budgeting for one or many hypervisors include the cost of replacement when a new hypervisor must completely replace an existing one. They should calculate the costs that business disruptions will cause, in addition to considering that data may be lost or that overall performance may decrease depending upon their hypervisor selection and the physical installation process.
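The cost structure described above, license plus management tooling plus ongoing support rather than the license alone, can be made concrete with a small calculation. All figures below are placeholders, not real vendor pricing; the function and variable names are illustrative.

```python
# Hedged sketch of the TCO reasoning discussed above. The point is the
# cost structure (license + management tools + recurring IT support over
# the planning horizon), not the made-up dollar amounts.

def total_cost_of_ownership(license_cost, mgmt_tools_cost,
                            annual_support_cost, years):
    """Sum the up-front and recurring costs of one hypervisor platform."""
    return license_cost + mgmt_tools_cost + annual_support_cost * years

# Hypothetical numbers: an existing commercial platform over three years...
single = total_cost_of_ownership(10_000, 5_000, 4_000, 3)

# ...versus adding a second, free-license hypervisor that still needs its
# own management tools and extra staff time to support.
second = total_cost_of_ownership(0, 3_000, 2_500, 3)

print(single)           # existing platform alone: 27000
print(single + second)  # same period with a second hypervisor: 37500
```

Even with a zero-dollar license, the second platform adds a five-figure sum here, which is exactly the point the survey respondents raise: the license is only one line item in the TCO.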

            The implementation of hypervisors has a clear impact on system administration. If a company needs to maintain its current hypervisors, replace one, or add a new one, the information technology department will need to remain highly involved for several reasons. First, the system administrators should review the available hypervisors and pick the ones most suitable for the company’s needs. It is unlikely that any other department would fully understand the technical implications of the hypervisor; depending on which one is selected, changes may be needed in the operating systems that the company’s network uses, in addition to the installation of software compatible with the hypervisor. Even if the IT department isn’t responsible for physically installing the hypervisor, it needs to be made fully aware of how it was installed and how to troubleshoot and optimize the system should the need arise. Lastly, the IT department is responsible for deciding whether the company would be better off using a single hypervisor or multiple hypervisors, and which kinds.

            Since system administrators are generally responsible for computer and network safety, they should be especially concerned with the use of hypervisors. Although hypervisors can increase efficiency, they can also pose a security threat; additional system administration is therefore essential to ensure that all employees know how to protect the company’s computers against these threats. Malware and rootkits can take advantage of hypervisor technology by installing themselves below the operating system, which makes them more difficult to detect. In this situation, the malware is able to intercept the operations of the operating system without anti-malware software being able to detect it. While some information technology professionals claim that there are ways to detect a hypervisor-based rootkit, the issue is still up for debate (Wang et al., 2009). The technology is relatively new, and we cannot be certain that these security issues can be removed.

            A second reason why system administrators would require additional training is the x86 architecture typically used in PC systems. Virtualization is generally difficult on this type of system and requires a complex hypervisor; to address this, CPU vendors have added hardware virtualization assistance, such as Intel’s VT-x and AMD’s AMD-V extensions. These support the hypervisor and allow it to work more efficiently. Other solutions include modifying the guest operating system to make system calls to the hypervisor using paravirtualization, and the use of Hyper-V to boost performance. It is essential for the information technology staff to be aware of the specific hypervisor their company is using, in addition to any potential modifications associated with it. Additional training and troubleshooting to resolve matters related to the company’s hypervisors would therefore be useful.
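On Linux, one quick way for administrators to see whether a processor advertises these extensions is to look for the "vmx" (Intel VT-x) or "svm" (AMD-V) flags in /proc/cpuinfo. The sketch below parses that file's format; the function name and sample string are illustrative.

```python
# Hedged sketch: detect hardware virtualization support from the CPU flags
# reported in /proc/cpuinfo ("vmx" = Intel VT-x, "svm" = AMD-V).

def virtualization_support(cpuinfo_text):
    """Return which hardware virtualization extension the flags advertise."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
    return None  # no extension advertised (or flags hidden from the guest)

# On a real system:
#   with open("/proc/cpuinfo") as f:
#       print(virtualization_support(f.read()))

sample = "flags\t\t: fpu vme de pse msr svm lm"
print(virtualization_support(sample))  # prints: AMD-V
```

If this returns None, a type 1 hypervisor will either refuse to run or fall back to the much slower software techniques described above, so it is a sensible first check before planning a deployment.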


Bigelow, S. J. (n.d.). Time to consider multiple hypervisors? SearchDataCenters. Retrieved from

Bredehoft, J. (2012). Hypervisors – Type 1/Type 2. Retrieved from

CISCO. (n.d.). Cisco Nexus 1000V Series Switches Data Sheet. Retrieved from 492971.html

Conger, J. (2012). Video: Microsoft Hyper-V Shared Nothing Live Migration. Jason Conger Blog. Retrieved from migration-on-windows-server-2012.aspx

Storageservers. (2012). Hitachi offers world’s first Server Virtualization software capable of running on multiple instances. Retrieved from virtualization-software-capable-of-running-on-multiple-instances/

VMware. (2004). Support for 64-bit Computing. Retrieved from

Wang, Z., Jiang, X., Cui, W., & Ning, P. (2009). Countering Kernel Rootkits with Lightweight Hook Protection. Microsoft Research / North Carolina State University. Retrieved from