
Managing Contention for Shared Resources on Multicore Processors

1. Modern multicore processors are organized into clusters of cores that share several hardware structures, such as last-level caches (LLCs, i.e., L2 or L3), memory controllers, interconnects, and prefetching hardware. Because each cluster shares these memory resources, it is referred to as a memory domain, and the shared structures together form the memory hierarchy. Threads running on cores within the same memory domain compete for these shared resources (Bischof, 2008). To understand how contention for shared resources affects application performance, consider the following example. Four applications, Soplex, Sphinx, Gamess, and Namd (Balasubramonian, Jouppi, & Muralimanohar, 2011), are taken from the SPEC (Standard Performance Evaluation Corporation) CPU 2006 benchmark suite and run in parallel on an Intel quad-core Xeon system. The group of applications was run several times under three different schedules; in each schedule the applications were paired differently, with each pair sharing a memory domain. The three pairing permutations were as follows (Fedorova, Blagodurov, & Zhuravlev, 2010):

  • Soplex and Sphinx ran in a memory domain, while Gamess and Namd shared another memory domain.
  • Sphinx was paired with Gamess, while Soplex shared a domain with Namd.
  • Sphinx was paired with Namd, while Soplex ran in the same domain with Gamess.

2. The main purpose is to capture the essence of memory-reuse profiles in a much simpler metric. Once such a metric is found, a way is determined to approximate it using information readily available to a thread scheduler. Memory-reuse profiles are successful at modeling contention mainly because they capture two core qualities of a thread: sensitivity and intensity. Sensitivity describes how much a thread suffers when it shares the cache with other threads. Intensity describes how much a thread hurts the other threads with which it shares the cache. Most of the important information contained in memory-reuse profiles is expressed by these two qualities, which makes them a sound basis for modeling cache contention between threads. Sensitivity and intensity are first derived from the metrics in memory-reuse profiles and are then approximated using online performance information (Fedorova, Blagodurov, & Zhuravlev, 2010).

3. New models for cache contention are assessed against one main objective: their ability to generate contention-free thread schedules. A model is useful if it helps the scheduler find the best schedules, or at least avoid the worst ones, so the models are evaluated on the merit of the schedules they produce. The Pain metric is used by the scheduler to find the best schedule. For instance, consider a system with two caches, each shared by a pair of cores (the model extends naturally to more cores per cache). The task is to identify the best schedule for four threads. The scheduler enumerates the permutations of threads that produce unique pairings within the memory domains. For four threads A, B, C, and D, the unique schedules are: (1) {(A,B), (C,D)}; (2) {(A,C), (B,D)}; and (3) {(A,D), (B,C)}, where the notation (A,B) means that threads A and B are co-scheduled in the same memory domain. For every schedule, the scheduler computes the Pain of each pair, for example Pain(A,B) and Pain(C,D) for the schedule {(A,B), (C,D)}, using the equations presented earlier. The schedule with the lowest total Pain is selected as the 'estimated best schedule'. The Pain metric can be computed from actual memory-reuse profiles, or it can be approximated using online data. Once the estimated best schedule is determined, it is run on real hardware and its performance is compared with that of the 'actual best schedule'. The actual best schedule is found in the most direct way: by running all possible schedules on real hardware and picking the one that performs best. This is feasible only for small workloads, because running every possible schedule becomes very time-consuming as the number of workloads grows (Fedorova, Blagodurov, & Zhuravlev, 2010).
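
To make the selection step concrete, here is a small worked example with hypothetical numbers, assuming the definitions used for the Pain metric in the cited paper: Pain(X|Y) = Sensitivity(X) × Intensity(Y), and Pain(X,Y) = Pain(X|Y) + Pain(Y|X). Suppose the profiles give sensitivity values A = 0.8, B = 0.2, C = 0.6, D = 0.1 and intensity values A = 0.9, B = 0.1, C = 0.7, D = 0.3. Then Pain(A,B) = 0.8×0.1 + 0.2×0.9 = 0.26 and Pain(C,D) = 0.6×0.3 + 0.1×0.7 = 0.25, so schedule (1) scores 0.26 + 0.25 = 0.51. The same calculation gives 1.17 for schedule (2) and 0.53 for schedule (3), so {(A,B), (C,D)} would be chosen as the estimated best schedule: the two threads that are both sensitive and intensive, A and C, end up in separate memory domains.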

4. The causes of contention in multicore systems are discussed in this answer. A number of experiments were performed to determine the main cause of contention on multicore systems. First, the degree of contention was measured and broken down across the different kinds of shared resources: the cache, the memory controller, the front-side bus, and the prefetching hardware (Das, Thankachan, & Debnath, 2011). The detailed experimental setup is defined in this study. The first candidate is contention for the shared cache, which occurs when competing threads evict each other's cache lines. However, this is not the main reason for performance degradation in multicore systems. The shared resources mainly responsible for performance degradation include:

  • Front-side bus
  • Prefetching resources
  • Memory controller

It is difficult to isolate the limitations of the prefetching hardware and their effect on contention, because the measured influence of prefetching reflects the combined effect of the other three factors together with the prefetching hardware itself. In these experiments, the long-standing memory-reuse models built to capture cache contention are not effective: a model that considers only shared-cache contention, when applied on a real system, does not predict performance well in the presence of the other kinds of contention. By contrast, the cache-miss rate is an excellent predictor of contention for the memory controller, the prefetching hardware, and the front-side bus (Fedorova, Blagodurov, & Zhuravlev, 2010). To create contention, every application was co-scheduled with the Milc application. An application that misses frequently in the cache keeps the memory controller and the front-side bus busy, and it is this, rather than the loss of cache space itself, that hurts the applications sharing the hardware. An application with a high LLC miss rate also makes heavy use of the prefetching hardware, since the amount of data it requests, counted via cache misses, also drives the prefetcher. A high miss rate is therefore an excellent indicator of heavy prefetching-hardware usage (Fedorova, Blagodurov, & Zhuravlev, 2010).

References

Balasubramonian, R., Jouppi, N. P., & Muralimanohar, N. (2011). Multi-core cache hierarchies. Morgan & Claypool.

Bischof, C. (2008). Parallel computing: Architectures, algorithms, and applications. IOS Press.

Das, V. V., Thankachan, N., & Debnath, N. C. (2011). Advances in power electronics and instrumentation engineering: Second International Conference, PEIE 2011, Nagpur, Maharashtra, India, April 21-22, 2011, proceedings. Springer.

Fedorova, A., Blagodurov, S., & Zhuravlev, S. (2010). Managing contention for shared resources on multicore processors. Communications of the ACM, 53(2), 49-57.


General Security Policy

  1. Scope

This policy is applicable to all information resources, internally connected systems, employees, and third parties who have access to the organization's wireless network. The scope of this policy also covers all legacy and future equipment, which will be configured and tuned as per the reference documentation (Barman, 2002).


Security Product Web Sites

There are several security product websites on the web that offer products to individuals and companies for the purpose of Internet security. As a Wireless Security Professional (WSP), the responsibility is to wade through these websites and find the right products for the company. The purpose of this paper is to evaluate the websites of Interlogix and Websense Security Products, comparing their strengths and weaknesses, their products, and the areas where they need improvement.


Google Chrome: The Upside of Process

Introduction

Despite many changes in consumer preference and the rise of new technology, Google has remained a leader in information technology, particularly in the field of software development. Apart from its search engine, it has many high-functioning platforms, one of which is Google Chrome, its own internet browser. Google uses the 'Agile' process in developing its web browser: an incremental, modifiable platform to which all applications are linked. The process model used to build this web browser, the application of the system and its software development, as well as the suitability of such a model for this corporation, are discussed below.


Analytical Agent Technology for Complex and Dynamically Evolving Environments

Abstract

The importance of simulation environments stretches throughout the academic and corporate spheres, with applications not only in aeronautics, electronics, and chemistry, but also in biology and physics. All of these industries rely heavily on simulators in order to better understand and validate the numerous theories that help to explain complex and dynamic systems. The purpose of this project is to investigate the agent-based simulation paradigm for designing and developing frameworks focused on monitoring such systems. Within this research, the use of multi-agent simulators and multi-modeling agents offers a new perspective on evolving complex and dynamic systems, one that differs from explanations based on mathematical equations. The techniques created will be applied to developing multi-agent simulators that are able to track the dynamically evolving systems they simulate.


Robust Security Networks

Robust Security Networks (RSNs) are exclusive to wireless networks: "a network protocol that is used for creating and establishing the secure communication connections and transmissions over the safe wireless networks such as 802.11" (Riaz, n.d.). Robust Security Networks, commonly known as WPA2, replaced the standard WEP security specification. Robust Security Network operation is broken down into five phases: discovery, authentication, key generation and distribution, protected data transfer, and connection termination. In a typical Robust Security Network, a station wirelessly sends probe requests that are received by access points, which respond with the complete RSN information element (IE) in the frame exchange. The wireless request can come from a NIC that is authenticated by the approved technique installed, after which the access points provide authentication and exchange information until the appropriate interface is established.


Agile Development Methodologies

Agile development methodologies (ADM) have been gaining popularity because it is increasingly difficult to follow static, plan-driven methods when the requirements of software projects change constantly as a result of both internal and external factors. ADM addresses this dilemma by breaking software projects into components, each of which can exist on its own and is fully compatible with the other components. As a result, a change or a new requirement does not disrupt the whole project or require starting from scratch. Each component or iteration is not only shorter in length but also carries lower complexity and risk. Four widely used agile development methods are extreme programming, crystal methods, scrum, and feature-driven development.


Wireless Security Action Plan

Comparison between Different Wireless Standards

802.11a

The 802.11a standard supports speeds of up to 54 Mbps. In addition, it utilizes the 5 GHz band. For transmission, the 802.11a standard uses the OFDM technique. The 802.11a standard can use several different data rates: 6, 9, 12, 18, 24, 36, 48, and 54 Mbps.


Beyond Hardening

Wireless networks are now preferred over wired communications, as they are more convenient, portable, and easy to use. However, wireless networks pose many security risks, as data is transmitted and received over the air. These risks must be addressed to protect information assets and critical data. After hardening the wireless network by all available means, three factors should still be addressed: confidentiality, integrity, and availability. To address these factors, organizations need to conduct risk assessments in order to evaluate the security controls placed on the wireless networks (Buttyán & Hubaux, 2008). The objective is to maintain wireless network security through an ongoing maintenance process. Some of the factors that need to be addressed for maintaining wireless network security are given below:


How Will Astronomy Archives Survive the Data Tsunami?

How to Keep the Tsunami from Overcoming United States

On 14 May 2011, at Green Bank, West Virginia, the members present at the Innovations in Data-intensive Astronomy workshop recognized that managing huge data sets requires a community effort and partnerships with national cyber-infrastructure programs. Moreover, the archives will need to operate on lower budgets, which can only be achieved through better solutions, investigation, and innovation around new technologies. This research addresses the issues related to the archives and their uses.


SMP and MSMP Architecture

Symmetric multiprocessing (SMP) architecture is considered to be a major option for commercial servers of a certain scale for many years to come. The question that arises here is how to construct scalable systems, since the way they are built needs to be rethought. A major limiting factor for SMP is not capability; rather, it is the complexity of constructing an SMP with a massive number of central processing units spread across a large number of physical enclosures. A further question then arises: what can substitute for SMP as a scalable corporate solution that enables organizations to achieve competitive advantage? SMP technology will remain in the market as long as its technological and economic advantages hold, and those advantages can be extended by combining more than one SMP. Such systems are called Multiple SMPs (MSMPs). The MSMP is dissimilar from other scalable architectures and systems.


Linux Administration: Network Services and Functionality

1         Linux Instructional Document

This document will cover seven administrative tasks. A description along with screenshots will be provided for the viewer's understanding. The flavor of Linux used to perform the administrative tasks is Ubuntu 10.04, named "Lucid Lynx".

2         Task 1

2.1      Modification of the GRUB menu

For the modification of the GRUB menu, two files are used. The command executed to open the first file, grub.cfg, is "sudo gedit /boot/grub/grub.cfg". After entering this command in the terminal window, an editor window will open as shown in Figure 2.1.1 below.

Figure 2.1.1

 Figure 2.1.2

The second file, named '40_custom', is opened with the command "sudo gedit /etc/grub.d/40_custom" in another terminal window, as shown in Figure 2.1.2.

The highlighted commands will be copied to the 40_custom file, as shown in Figures 2.1.3 and 2.1.4.

Figure 2.1.3

The highlighted commands from Figures 2.1.3 and 2.1.4 are copied to the 40_custom file to make the changes; see Figure 2.1.5 below. Two operating systems appear in this figure: Microsoft Windows XP Professional and Ubuntu Linux. The titles of both operating systems can be edited in this file. After the changes are made, the '40_custom' file is saved.

Figure 2.1.5
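
For illustration, a custom entry in 40_custom typically has the shape sketched below; the exact lines must be copied from the system's own grub.cfg, and the title and partition (hd0,1) shown here are hypothetical:

menuentry "Microsoft Windows XP Professional" {
    set root=(hd0,1)
    chainloader +1
}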

The default timeout can be changed to 30 by modifying the value in the 'grub.cfg' file. The relevant line is highlighted in Figure 2.1.6.
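
In grub.cfg, the line being modified looks like the following (a sketch; the stock value differs between installations):

set timeout=30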

Ubuntu does not have an inittab file. An alternative and easier method is therefore used to change the default run level on Ubuntu Linux: the file "/etc/init/rc-sysinit.conf" is edited, and the default run level can be changed to 'text only', as shown in Figure 2.1.7.

Figure 2.1.7

2.2      Configuration of a Dual-Boot System (if necessary)

Step 1: Open a terminal window in Linux by accessing “System,” >>> “Accessories,” >>> “Terminal.”

Step 2: Type the command “sudo grub” at the command prompt to start the Grub editing program.

Step 3: Tell Grub where the Ubuntu installation files are by entering the following command: "root (hdA,B)", where "A" is the hard drive number and "B" is the partition on which Ubuntu is installed. For example, if Ubuntu is installed on the second partition of the first hard drive, the command will be "root (hd0,1)".

Step 4: The Grub loader is reinstalled on the boot drive by typing the command "setup (hdA)", in which "hdA" is the drive on which Ubuntu is installed. If it is the first hard drive, the command will be "setup (hd0)".

Step 5: Exit the Grub program by typing the command “quit” at the terminal window.

Step 6: Reboot the system. When the system restarts, the Grub menu will allow choosing between Ubuntu and Windows.
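
Putting the steps together, the whole session in the terminal window looks roughly like this, assuming (as in the example above) that Ubuntu is on the second partition of the first hard drive:

sudo grub
grub> root (hd0,1)
grub> setup (hd0)
grub> quit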

2.3      Setting the run control level to text (in place of the /etc/inittab file)

Linux has certain run levels, which define the processes and services currently running on the system. The different run levels in Linux, retrieved from "www.improvehomelife.com", are:

Runlevel 1: Single user mode
Runlevel 2: Basic multi user mode without NFS
Runlevel 3: Full multi user mode (text based)
Runlevel 4: unused
Runlevel 5: Multi user mode with Graphical User Interface
Runlevel 6: Reboot System

For identifying the current run level, the "who -r" command is executed, as shown in Figure 2.3.1

Figure 2.3.1

For changing the default run level, configuration needs to be done in the file called "rc-sysinit.conf". The lines that are edited are highlighted in Figure 2.3.2

Figure 2.3.2
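
As a sketch, the stock line in /etc/init/rc-sysinit.conf reads "env DEFAULT_RUNLEVEL=2"; changing it as below selects the text-based multi-user mode from the table above (this assumes the Red Hat-style numbering listed there, since on a stock Ubuntu install run levels 2 through 5 are configured identically):

env DEFAULT_RUNLEVEL=3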

3         Task 2

3.1      Configuring Cron Job

"cron" is a self-starting daemon that runs processes defined by the user at scheduled times; its schedule is maintained through the "crontab" utility. For running the archiving command every ten minutes, the following command is scheduled from the terminal: "rsync -rougv --archive --delete-excluded --ignore-errors --exclude=*.gvfs* /home/wjju /homeback.tar/"

The entry "*/10 * * * * /path/to/command" is added through the "crontab" editor. Figure 2.1.1 shows the privileges for a newly created user who can access the backup directory.
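
Combining the schedule and the archiving command, the crontab entry (added with "crontab -e") would look roughly like the following single line; the paths are the ones used above:

*/10 * * * * rsync -rougv --archive --delete-excluded --ignore-errors --exclude=*.gvfs* /home/wjju /homeback.tar/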

4         Task 3

Four users are created with passwords, and two new files are created in each user's home directory. The users are Barney, Fred, Wilma and Betty, as shown in Figures 3.1.1 and 3.1.2

Figure 3.1.1

 Figure 3.1.2           

Finance and Marketing groups are created. Fred and Barney are placed in the marketing group, and Wilma and Betty are placed in the finance group. Three shared directories are then set up with the following access rules:

  • Campaigns – marketing group has full access, all other users are denied access.
  • Financials – finance group has full access, all other users are denied access.
  • Public – finance group has full access, all other users have read-only access.

As shown in Figure 3.1.3 below (a command-level sketch follows the figure):

Figure 3.1.3
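
One way to create the groups and realize these permissions from the terminal is sketched below; the parent directory /srv is an assumption, and the figures may show a different path or method:

sudo groupadd marketing
sudo groupadd finance
sudo usermod -aG marketing fred
sudo usermod -aG marketing barney
sudo usermod -aG finance wilma
sudo usermod -aG finance betty
sudo mkdir /srv/Campaigns /srv/Financials /srv/Public
sudo chgrp marketing /srv/Campaigns
sudo chgrp finance /srv/Financials /srv/Public
sudo chmod 770 /srv/Campaigns /srv/Financials
sudo chmod 775 /srv/Public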

5         Task 4

5.1      Name Daemon

In Microsoft Windows, the name daemon is known as the DNS (Domain Name System) service. Linux relies on a name daemon to map host names to IP addresses. Ubuntu ships with BIND (Berkeley Internet Name Domain). To install BIND, the following command is used in the terminal window: "sudo apt-get install bind9" (the Ubuntu package is named bind9). BIND stores all of its configuration files in the /etc/bind directory. The main configuration file is located at /etc/bind/named.conf, and the db.root file, located at /etc/bind/db.root, contains the root name servers of the world. To start the name daemon, the following command is executed in the terminal window:

"sudo /etc/init.d/bind9 start"
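
As an optional illustration, a minimal forward zone could then be declared in /etc/bind/named.conf.local; the zone name example.com and the database file name below are hypothetical placeholders:

zone "example.com" {
    type master;
    file "/etc/bind/db.example.com";
};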

6         Task 5

6.1      File Transfer Protocol

For Ubuntu Linux, the file transfer protocol service named "vsftpd" is installed and configured via the terminal window (Bautts, Dawson, and Purdy). The command used for installing "vsftpd" is "sudo apt-get install vsftpd". To access the configuration file, the user opens "/etc/vsftpd.conf". To disable anonymous access, the "anonymous_enable" setting is edited to NO and the "local_enable" setting is edited to YES. After making changes to vsftpd.conf, the FTP server is restarted with the following command in the terminal window: "sudo /etc/init.d/vsftpd restart".
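
After the edit, the relevant lines in /etc/vsftpd.conf look like this (a minimal sketch; all other directives keep their defaults):

anonymous_enable=NO
local_enable=YES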

7         Task 6

7.1      Linux Email Server

Configuring a Linux email server is a challenging process, since it involves configuring several services that together make up the email server. Each service is configured separately to keep the complex configuration manageable.

7.1.1     MTA (Mail Transfer Agents)

MTAs are the agents used for sending and receiving email on the server, and they play a key role in a typical email server setup. Ubuntu ships with Postfix by default as its mail transfer agent; Exim4 is also supported and is present in the main repository.

7.1.2     Delivery Agents

To download email from POP (Post Office Protocol)-enabled email accounts, POP and IMAP (Internet Message Access Protocol) services need to be configured on the mail server.

7.1.3     Mail Filter

Every mail server must have features that address security issues. The user can set up filtering based on custom rules, as per requirements, in order to detect viruses and spam.

7.1.4     Web Email Access

"SquirrelMail" and "Open WebMail" services can be configured for accessing the web email feature in Linux.
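
A sketch of the corresponding install commands on Ubuntu is given below; the package names assume the stock repositories and may vary by release:

sudo apt-get install postfix
sudo apt-get install dovecot-pop3d dovecot-imapd
sudo apt-get install squirrelmail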

8         Task 7

8.1      Web Server Configuration in Linux

Apache is a popular web server for Linux operating systems. A web server processes requests coming from web browsers and returns the requested web pages to them. The Apache web server does not come installed by default in Linux, so the following command is executed in the terminal window: "sudo apt-get install apache2". To check whether the installation has been performed correctly, the loopback IP address 127.0.0.1 is entered in the Mozilla Firefox browser. If the Apache web server is installed correctly, the page will display "It works!".

8.1.1     Web server Configuration

Domain name configuration is performed in the "/etc/apache2" directory (Collings and Wall). There are two sub-directories available in this directory: sites-available and sites-enabled. A file in the sites-available sub-directory is used as a template for configuring one's own site; the command for this (run inside sites-available) is "sudo cp default mywebsite.com".

An administrative email address is defined here for contacting the web master.

 

When opened in a file editor, mywebsite.com contains directives along the following lines. The DocumentRoot directive defines where the web site files are located, and the ServerName and ServerAlias directives need to be defined so that the web server knows which virtual host this configuration file refers to:

#NameVirtualHost *
ServerAdmin [email protected]
ServerName mywebsite.com
ServerAlias mywebsite.com
DocumentRoot /var/www/mywebsite.com
<Directory />
    Options FollowSymLinks
    AllowOverride None
</Directory>
<Directory /var/www/>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride None
    Order allow,deny
    allow from all
    # This directive allows us to have apache2’s default start page
    # in /apache2-default/, but still have / go to the right place
    #RedirectMatch ^/$ /apache2-default/
</Directory>

After the configuration, the new site is enabled by creating, in /etc/apache2/sites-enabled, a symbolic link to the /etc/apache2/sites-available/mywebsite.com configuration file. This is achieved by executing the following commands in the terminal window:

"cd /etc/apache2/sites-enabled"

"sudo ln -s /etc/apache2/sites-available/mywebsite.com"
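
Alternatively, Ubuntu provides a helper script that creates this link; as a brief aside, the equivalent commands would be "sudo a2ensite mywebsite.com" followed by "sudo /etc/init.d/apache2 reload".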

8.1.2     Index File Creation

A directory is created at /var/www/mywebsite.com for placing the index.html file in it, with the command:

"sudo mkdir /var/www/mywebsite.com"

The last step is to restart the Apache web server with the following command:

"sudo /etc/init.d/apache2 restart"

References

Bautts, T., T. Dawson, and G. Purdy. Linux Network Administrator’s Guide. O’Reilly Media, 2005. Print.

"Computers: Linux Runlevels." Welcome to ImproveHomeLife.com. Web. 14 Sept. 2013. <http://improvehomelife.com/Linux_Runlevels-7745.htm>.

Collings, T., and K. Wall. Red Hat Linux Networking and System Administration. Wiley, 2006. Print.


Network Upgrade

1         Hardware Components

Disk Drives

Data and other files are stored on disk drives. Removable disks that can transfer data from one workstation to another are called "floppy disks". Floppy disks come in two sizes, 3½-inch and 5¼-inch in diameter, and they are used to store data: data can be written to the disk and easily read from it. Floppy disks are easy to use, as the disk is simply inserted into the disk drive. After the disk is inserted, a light on the disk drive indicates that the data is being accessed. It is important never to remove or insert the disk while this light is still on, as doing so may damage the disk as well as the data it holds.


Server Configuration and Functionality

1. Linux Today

Today Linux is operated by millions of organizations, developers, and desktop users. The success story of Linux is the immaculate fundamentals of design, powerful, inexpensive, stable, open source, in expensive hardware, and integrates nicely. According to (“Nasa Goddard Space Center Selects Linux Networx Supersystem.” 8-8), The National Aeronautical Space agency of USA has selected Linux for computational sciences. The system is implemented to increase the throughput of the applications related to weather and climate variability and simulating astrophysics phenomena. Linux has now started to contribute to the cell phone industry. Motorola launched the modern mobile Linux development platform named “Motomagx”. This platform will facilitate the developers to build portable applications. Senior Vice President of Motorola, Mr. Alan Mutricy said “We know that software is just as important as hardware, through the introduction of our Motomagx platform, we are reinforcing our firm commitment to Linux,” (Wilson 16)