Friday, January 15, 2010

 January 16, 2010

Compute the average turnaround time for the following processes under each of the scheduling algorithms below.

Job Name:      A   B   C   D   E   F   G   H   I   J   K

Arrival Time:  0   1   2   3   4   5   6   7   8   9   10

CPU Cycle:     5   2   8   4   3   1   2   9   7   3   4

a. First Come First Serve Algorithm

Job (in order of execution):  A   B   C   D   E   F   G   H   I   J   K

Finish time:                  5   7  15  19  22  23  25  34  41  44  48

ATT=(5+7+15+19+22+23+25+34+41+44+48)/11=25.73
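For reference, a small Python sketch (not part of the original answer) that reproduces the finish times above; like the answer, it measures each job's turnaround from time 0 rather than from its arrival time.

```python
# First Come First Serve: jobs run to completion in arrival order.
jobs = [("A", 0, 5), ("B", 1, 2), ("C", 2, 8), ("D", 3, 4), ("E", 4, 3), ("F", 5, 1),
        ("G", 6, 2), ("H", 7, 9), ("I", 8, 7), ("J", 9, 3), ("K", 10, 4)]

clock, finish = 0, {}
for name, arrival, burst in jobs:            # already listed in arrival order
    clock = max(clock, arrival) + burst      # wait for the job if the CPU is idle
    finish[name] = clock

print(finish)                                # A:5, B:7, C:15, ..., K:48
print(sum(finish.values()) / len(jobs))      # ~25.73 (turnaround measured from time 0)
```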

b. Shortest Job Next

Job (in order of execution):  F   B   G   E   J   D   K   A   I   C   H

Finish time:                  1   3   5   8  11  15  19  24  31  39  48

ATT=(1+3+5+8+11+15+19+24+31+39+48)/11=18.55
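The answer above appears to treat all eleven jobs as available at time 0 and simply runs them in order of CPU cycle; a minimal sketch under that assumption:

```python
# Shortest Job Next, treating all eleven jobs as available at time 0:
# sort by CPU cycle and run them back to back.
burst = {"A": 5, "B": 2, "C": 8, "D": 4, "E": 3, "F": 1,
         "G": 2, "H": 9, "I": 7, "J": 3, "K": 4}

clock, finish = 0, {}
for name in sorted(burst, key=burst.get):    # ties keep the original A..K order
    clock += burst[name]
    finish[name] = clock

print(finish)                                # F:1, B:3, G:5, E:8, J:11, ..., H:48
print(sum(finish.values()) / len(burst))     # ~18.55
```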

c. Shortest Remaining Time

Job (in order of execution):  A    B    C    D    F    G    E    H    I    J    A    K

Time:                        0-3   3    4    8    9   11   13   14   15   18   22   26

ATT=(22+2+2+5+9+5+40+24+9+16+5)/11=12.64
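For comparison, a unit-by-unit sketch of a preemptive shortest-remaining-time scheduler; exact finish times depend on how ties and preemptions are handled, so it will not necessarily reproduce the timeline above.

```python
# Shortest Remaining Time: every time unit, run the arrived job with the
# least CPU time left (the preemptive version of SJN).
jobs = {"A": (0, 5), "B": (1, 2), "C": (2, 8), "D": (3, 4), "E": (4, 3), "F": (5, 1),
        "G": (6, 2), "H": (7, 9), "I": (8, 7), "J": (9, 3), "K": (10, 4)}

remaining = {name: burst for name, (arrival, burst) in jobs.items()}
clock, finish = 0, {}
while remaining:
    ready = [n for n in remaining if jobs[n][0] <= clock]
    if not ready:                                  # CPU idle until the next arrival
        clock += 1
        continue
    n = min(ready, key=lambda x: remaining[x])     # shortest remaining time wins
    remaining[n] -= 1
    clock += 1
    if remaining[n] == 0:
        finish[n] = clock
        del remaining[n]

turnaround = {n: finish[n] - jobs[n][0] for n in jobs}
print(sum(turnaround.values()) / len(jobs))        # average turnaround time
```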

 d. Round Robin 

Job (one time slice each):   A   B   C   D   E   F   G   H   I   J   K   A

End of slice:                2   4   6   8  10  11  13  15  17  19  21  23

Job (remaining slices):      C   D   E   H   I   J   K   A   C   H   I   C   H   I   H

End of slice:               25  27  28  30  32  33  35  36  38  40  42  44  46  47  48

ATT=(36+3+42+24+24+6+7+41+39+24+25)/11=24.64
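A sketch that reproduces the round-robin timeline above, assuming a time quantum of 2 (the slice length shown) and a ready queue that simply starts as A through K:

```python
from collections import deque

# Round Robin with a time quantum of 2; preempted jobs rejoin at the back.
QUANTUM = 2
arrival = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4, "F": 5,
           "G": 6, "H": 7, "I": 8, "J": 9, "K": 10}
burst = {"A": 5, "B": 2, "C": 8, "D": 4, "E": 3, "F": 1,
         "G": 2, "H": 9, "I": 7, "J": 3, "K": 4}

queue, remaining = deque(burst), dict(burst)
clock, finish = 0, {}
while queue:
    name = queue.popleft()
    run = min(QUANTUM, remaining[name])
    clock += run
    remaining[name] -= run
    if remaining[name] == 0:
        finish[name] = clock                 # e.g. B finishes at 4, F at 11, G at 13
    else:
        queue.append(name)                   # back of the queue for another slice

turnaround = {n: finish[n] - arrival[n] for n in finish}
print(finish)                                # A:36, B:4, C:44, D:27, ..., K:35
print(sum(turnaround.values()) / len(finish))   # ~24.64
```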

 

Saturday, January 9, 2010

Prelim Exam

Preliminary Exam

IT132

Jan. 9, 2010

 

I. Concept Questions

 

1. Name the five key concepts about an operating system that you think the user needs to know and understand.

 

Understanding an operating system starts with its structure. Conceptually, an operating system can be broken down into the kernel, the shell, and the system utilities. The kernel is the core of the operating system; in some systems the distinction between the kernel and the shell is clear, while in others it is only conceptual. A process is a program in execution; new processes are spawned when needed and terminated to free memory and other resources. Files are a method of organizing information on disk and are usually arranged in an inverted tree-like structure. A system call is a request that a program makes to the operating system.

 

The roles of an operating system vary with the hardware and user programs it was designed to work with. The earliest operating systems were designed to help applications interact with hardware. This has grown to the point that the operating system now defines the machine. The operating system provides an interface to the underlying hardware, the application programs, and the user.

The operating system manages:

- Hardware

- CPU

- Memory

- Device drivers

- I/O

- Applications

There are four basic types of operating systems:

- Real-time operating system

- Single-user, single-task operating system

- Single-user, multi-task operating system

- Multi-user operating system

2. List three tangible (physical) resources of a computer system and explain how each works.

IRQ – Hardware devices use the IRQ bus on a motherboard to signal the CPU for attention.

port addresses – Software addresses a hardware device using the device’s port, or I/O, address. The device “listens” to the bus to determine if it is being requested.

memory addresses – Software communicates with physical memory located in either RAM or ROM chips using memory addresses.

DMA channel – Data travels back and forth between memory and a hardware device using this channel.

3. Explain the following:

      a. Internal Fragmentation. How does it occur?

Internal fragmentation is a form of fragmentation that arises when memory is allocated only in multiples of a subunit. A request of arbitrary size must be met by rounding up to the next highest multiple of the subunit, leaving a small amount of memory that is allocated but not in use. Internal fragmentation is reduced by reducing the size of the subunit; such a reduction, however, increases external fragmentation.
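As a small illustration (the 512-byte subunit is made up, not from the exam):

```python
# Internal fragmentation: memory is handed out only in multiples of a fixed
# subunit, so each request is rounded up to the next multiple.
SUBUNIT = 512                                # allocation granularity in bytes (made up)

def allocate(request):
    units = -(-request // SUBUNIT)           # ceiling division
    allocated = units * SUBUNIT
    return allocated, allocated - request    # (bytes allocated, bytes wasted inside)

print(allocate(1300))                        # (1536, 236): 236 bytes of internal fragmentation
```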

b. External Fragmentation. How does it occur?

With external fragmentation, the free space available for storage is divided into many small pieces of different sizes. In dynamic memory allocation, a block may be requested even though no single contiguous free block is large enough to hold it: if there are ten blocks of 300 bytes of free space separated by allocated regions, one still cannot allocate a requested block of 1000 bytes.

 

External fragmentation also occurs in file systems as many files of different sizes are created, change size, and are deleted. The effect is even worse if a file that is divided into many small pieces is deleted, because this leaves similarly small regions of free space. To counter external fragmentation, the allocated regions can be moved together into one large adjacent block, leaving all of the remaining free space in one large block; compacting garbage collectors do this to improve dynamic memory allocation performance, and defragmentation tools do the same for disk drives.
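The ten-blocks-of-300-bytes example above, in a few lines:

```python
# External fragmentation: 3000 bytes are free in total, but no single
# contiguous hole can satisfy a 1000-byte request.
free_blocks = [300] * 10                     # ten scattered 300-byte holes
request = 1000

print(sum(free_blocks))                                  # 3000 bytes free overall
print(any(block >= request for block in free_blocks))    # False: the request still fails
```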

c. Compaction. Why is it needed?

Compaction attacks the problem of fragmentation by moving all the allocated blocks to one end of memory, thus combining all the holes. Aside from the obvious cost of all that copying, there is an important limitation to compaction: any pointers to a block need to be updated when the block is moved. Unless it is possible to find all such pointers, compaction is not possible. Pointers can be stored in the allocated blocks themselves as well as in other places in the client of the memory manager. In some situations, pointers can point not only to the start of a block but also into its body; for example, if a block contains executable code, a branch instruction might be a pointer to another location in the same block. Compaction is performed in three phases. First, the new location of each block is calculated to determine the distance the block will be moved. Then each pointer is updated by adding to it the amount that the block it points into will be moved. Finally, the data is actually moved. There are various clever tricks possible to combine these operations.
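A toy sketch (not part of the exam answer) of those three phases, with pointer updating reduced to returning an old-to-new address map:

```python
# Toy version of the three compaction phases on allocated (start, size) blocks.
def compact(blocks):
    blocks = sorted(blocks)
    relocation, next_free = {}, 0
    # Phase 1: compute each block's new start address (i.e., how far it moves).
    for start, size in blocks:
        relocation[start] = next_free
        next_free += size
    # Phase 2 (done by the caller): rewrite every pointer p into the block that
    # started at old as relocation[old] + (p - old).
    # Phase 3 (also the caller): copy each block's data to its new address.
    return relocation, next_free             # new layout and start of the single hole

print(compact([(0, 30), (50, 20), (120, 40)]))
# ({0: 0, 50: 30, 120: 50}, 90): all free space is now one hole starting at 90
```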

 

4. Cache memory: how does it work?

 

CACHE

Pronounced cash, a cache is a special high-speed storage mechanism. It can be either a reserved section of main memory or an independent high-speed storage device. Two types of caching are commonly used in personal computers: memory caching and disk caching.

A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective because most programs access the same data or instructions over and over. By keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM.

Some memory caches are built into the architecture of microprocessors. The Intel 80486 microprocessor, for example, contains an 8K memory cache, and the Pentium has a 16K cache. Such internal caches are often called Level 1 (L1) caches. Most modern PCs also come with external cache memory, called Level 2 (L2) caches. These caches sit between the CPU and the DRAM. Like L1 caches, L2 caches are composed of SRAM but they are much larger.

Disk caching works under the same principle as memory caching, but instead of using high-speed SRAM, a disk cache uses conventional main memory. The most recently accessed data from the disk (as well as adjacent sectors) is stored in a memory buffer. When a program needs to access data from the disk, it first checks the disk cache to see if the data is there. Disk caching can dramatically improve the performance of applications, because accessing a byte of data in RAM can be thousands of times faster than accessing a byte on a hard disk.

When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is judged by its hit rate. Many cache systems use a technique known as smart caching, in which the system can recognize certain types of frequently used data. The strategies for determining which information should be kept in the cache constitute some of the more interesting problems in computer science.
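As a small illustration (not from the exam), a toy LRU disk cache that tracks its hit rate; the DiskCache class and the read_from_disk callback are made-up names for the sketch:

```python
from collections import OrderedDict

# Toy disk cache: keep recently read sectors in main memory, evict the least
# recently used one when full, and track the hit rate.
class DiskCache:
    def __init__(self, capacity, read_from_disk):
        self.capacity = capacity
        self.read_from_disk = read_from_disk       # the slow path (the real device)
        self.data = OrderedDict()
        self.hits = self.misses = 0

    def read(self, sector):
        if sector in self.data:
            self.hits += 1
            self.data.move_to_end(sector)          # mark as most recently used
            return self.data[sector]
        self.misses += 1
        value = self.read_from_disk(sector)
        self.data[sector] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)          # evict the least recently used sector
        return value

cache = DiskCache(capacity=2, read_from_disk=lambda s: "sector %d" % s)
for sector in [1, 2, 1, 1, 3, 2]:
    cache.read(sector)
print(cache.hits / (cache.hits + cache.misses))    # 2 hits out of 6 reads, about 0.33
```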

5. Which is the fastest cache: L1, L2, or L3? Why?

 

Level 1 (Primary) Cache

Level 1 or primary cache is the fastest memory on the PC; it is, in fact, built directly into the processor itself. This cache is very small, generally from 8 KB to 64 KB, but it is extremely fast because it runs at the same speed as the processor. If the processor requests information and can find it in the Level 1 cache, that is the best case, because the information is there immediately and the system does not have to wait.

 

Note: Level 1 cache is also sometimes called "internal" cache since it resides within the processor.

II. Memory Utilization Problem

 

Given the following information:

Job Number   Memory Requested
J1           700KB
J2           500KB
J3           740KB
J4           850KB
J5           610KB

Memory Block   Memory Block Size
1132           700KB
1003           720KB
1114           800KB
2310           750KB
1755           610KB

a. Use the best-fit algorithm to allocate the memory blocks to the five arriving jobs.

 

Memory Block   Memory Block Size   Job Number   Job Size   Status   Internal Fragmentation
1132           700KB               J1           700KB      Busy     0KB
1003           720KB               J5           610KB      Busy     110KB
1114           800KB               -            -          Free     -
2310           750KB               J3           740KB      Busy     10KB
1755           610KB               J2           500KB      Busy     110KB
Total          3580KB                           2550KB              230KB
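A small Python sketch (not part of the exam answer) of best fit on the data above; it reproduces the placements in the table:

```python
# Best fit: give each arriving job the smallest free block it fits in.
blocks = {1132: 700, 1003: 720, 1114: 800, 2310: 750, 1755: 610}   # block -> size (KB)
jobs = [("J1", 700), ("J2", 500), ("J3", 740), ("J4", 850), ("J5", 610)]

free, allocation = dict(blocks), {}
for job, size in jobs:
    candidates = [b for b, s in free.items() if s >= size]
    if candidates:                                       # otherwise the job waits (J4)
        best = min(candidates, key=lambda b: free[b])    # tightest fit
        allocation[job] = (best, free[best] - size)      # (block, internal fragmentation)
        del free[best]

print(allocation)
# {'J1': (1132, 0), 'J2': (1755, 110), 'J3': (2310, 10), 'J5': (1003, 110)}
```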

b. Use the first-fit algorithm to allocate the memory blocks to the five arriving jobs.

 

Memory Block   Memory Block Size   Job Number   Job Size   Status   Internal Fragmentation
1132           700KB               J1           700KB      Busy     0KB
1003           720KB               J2           500KB      Busy     220KB
1114           800KB               J3           740KB      Busy     60KB
2310           750KB               J5           610KB      Busy     140KB
1755           610KB               -            -          Free     -
Total          3580KB                           2550KB              420KB
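The same data under first fit, as a sketch:

```python
# First fit: scan the blocks in table order and take the first one that fits.
blocks = [(1132, 700), (1003, 720), (1114, 800), (2310, 750), (1755, 610)]
jobs = [("J1", 700), ("J2", 500), ("J3", 740), ("J4", 850), ("J5", 610)]

free, allocation = dict(blocks), {}        # dict keeps the table order
for job, size in jobs:
    block = next((b for b, s in free.items() if s >= size), None)
    if block is not None:                  # otherwise the job waits (J4)
        allocation[job] = (block, free[block] - size)
        del free[block]

print(allocation)
# {'J1': (1132, 0), 'J2': (1003, 220), 'J3': (1114, 60), 'J5': (2310, 140)}
```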

 

 

 

c. Use the next-fit algorithm to allocate the memory blocks to the five arriving jobs.

Memory Block   Memory Block Size   Job Number   Job Size   Status   Internal Fragmentation
1132           700KB               -            -          Free     -
1003           720KB               J5           610KB      Busy     110KB
1114           800KB               J3           740KB      Busy     60KB
2310           750KB               J1           700KB      Busy     50KB
1755           610KB               J2           500KB      Busy     110KB
Total          3580KB                           2550KB              330KB
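A sketch of next fit; since the result depends on where the roving pointer starts (which the answer above does not state), this version starts at the top of the table and does not necessarily reproduce the placements shown:

```python
# Next fit: like first fit, but each search resumes where the previous
# allocation stopped (a roving pointer) and wraps around the block list.
blocks = [(1132, 700), (1003, 720), (1114, 800), (2310, 750), (1755, 610)]
jobs = [("J1", 700), ("J2", 500), ("J3", 740), ("J4", 850), ("J5", 610)]

free = dict(blocks)
order = [b for b, _ in blocks]
allocation, pointer = {}, 0                # pointer starts at the top of the table
for job, size in jobs:
    for step in range(len(order)):
        block = order[(pointer + step) % len(order)]
        if block in free and free[block] >= size:
            allocation[job] = (block, free[block] - size)
            del free[block]
            pointer = (pointer + step + 1) % len(order)   # resume after this block
            break

print(allocation)   # with this data and starting point, the result matches first fit
```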

 

d. Use the worst-fit algorithm to allocate the memory blocks to the five arriving jobs.

Memory Block   Memory Block Size   Job Number   Job Size   Status   Internal Fragmentation
1132           700KB               -            -          Free     -
1003           720KB               J5           610KB      Busy     110KB
1114           800KB               J1           700KB      Busy     100KB
2310           750KB               J2           500KB      Busy     250KB
1755           610KB               -            -          Free     -
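A sketch of worst fit on the same data; it reproduces the placements in the table above:

```python
# Worst fit: give each arriving job the LARGEST free block it fits in, hoping
# the leftover piece stays big enough to be useful later.
blocks = {1132: 700, 1003: 720, 1114: 800, 2310: 750, 1755: 610}   # block -> size (KB)
jobs = [("J1", 700), ("J2", 500), ("J3", 740), ("J4", 850), ("J5", 610)]

free, allocation = dict(blocks), {}
for job, size in jobs:
    candidates = [b for b, s in free.items() if s >= size]
    if candidates:                                       # otherwise the job waits (J3, J4)
        worst = max(candidates, key=lambda b: free[b])   # loosest fit
        allocation[job] = (worst, free[worst] - size)    # (block, internal fragmentation)
        del free[worst]

print(allocation)
# {'J1': (1114, 100), 'J2': (2310, 250), 'J5': (1003, 110)}
```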

 

 

2. Given the following information:

Table 2A
Job Number   Memory Requested
J1           30KB
J2           50KB
J3           30KB
J4           25KB
J5           35KB

ORIGINAL STATE OF MAIN MEMORY (Table 2B)
100KB (P1)
25KB (P2)
25KB (P3)
50KB (P4)
30KB (P5)
a. Create a memory layout for the fixed partitions after job entry, based on the given information (Table 2A and Table 2B).


b. Before Job 6 (30KB) and Job 7 (45KB) arrive, there are three jobs that are done and ready for processing: J2, J3, and J4. Create an initial memory layout for the dynamic partitions based on the given information (Table 2A).

3. Illustrate and find the page number and displacement of a given program line: Job 1 is 1600 lines long, PS = 200, and LNTBL = 542.

           

542 / 200 = 2 remainder 142, so line 542 is found on page 2 with a displacement of 142 (pages numbered from 0).
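The same computation with Python's divmod:

```python
# Page number and displacement: integer-divide the line number by the page
# size; the quotient is the page, the remainder is the displacement.
PAGE_SIZE = 200
line_number = 542                  # LNTBL from the problem

page, displacement = divmod(line_number, PAGE_SIZE)
print(page, displacement)          # 2 142 (pages numbered starting from 0)
```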



---------------End------------