Friday, December 4, 2009

VIRTUAL MEMORY is a common part of most operating systems on desktop computers. It has become so common because it provides a big benefit for users at a very low cost.

Most computers today have something like 64 or 128 megabytes of RAM (random-access memory) available for use by the CPU (central processing unit). Often, that amount of RAM is not enough to run all of the programs that most users expect to run at once. For example, if you load the Windows operating system, an e-mail program, a Web browser and a word processor into RAM simultaneously, 64 megabytes is not enough to hold it all. If there were no such thing as virtual memory, your computer would have to say, "Sorry, you cannot load any more applications. Please close an application to load a new one." With virtual memory, the computer can look for areas of RAM that have not been used recently and copy them onto the hard disk. This frees up space in RAM to load the new application. Because it does this automatically, you don't even know it is happening, and it makes your computer feel like it has unlimited RAM space even though it has only 64 megabytes installed. Because hard-disk space is so much cheaper than RAM chips, virtual memory also provides a nice economic benefit.

The area of the hard disk that stores the RAM image is called a page file. It holds pages of RAM on the hard disk, and the operating system moves data back and forth between the page file and RAM. (On a Windows machine, page files have a .SWP extension.)

Of course, the read/write speed of a hard drive is much slower than RAM, and the technology of a hard drive is not geared toward accessing small pieces of data at a time. If your system has to rely too heavily on virtual memory, you will notice a significant performance drop. The key is to have enough RAM to handle everything you tend to work on simultaneously. Then, the only time you "feel" the slowness of virtual memory is in the slight pause that occurs when you change tasks. When you have enough RAM for your needs, virtual memory works beautifully. When you don't, the operating system has to constantly swap information back and forth between RAM and the hard disk. This is called thrashing, and it can make your computer feel incredibly slow.



HOW VIRTUAL MEMORY WORKS


When a computer is running, many programs are simultaneously sharing the CPU. Each running program, plus the data structures needed to manage it, is called a process.

Each process is allocated an address space. This is a set of valid addresses that can be used. This address space can be changed dynamically. For example, the program might request additional memory (from dynamic memory allocation) from the operating system.

If a process tries to access an address that is not part of its address space, an error occurs, and the operating system takes over, usually killing the process (with a core dump, etc.).

How does virtual memory play a role? As you run a program, it generates addresses. Addresses are generated (for RISC machines) in one of three ways:
A load instruction
A store instruction
Fetching an instruction
Load/store instructions create data addresses, while fetching an instruction creates instruction addresses. Of course, RAM doesn't distinguish between the two kinds of addresses. It just sees an address.

Each address generated by a program is considered virtual. It must be translated to a real physical address. Thus, address translation is occurring all the time. As you might imagine, this must be handled in hardware if it's to be done efficiently.

You might think translating each address from virtual to physical is a crazy idea because of how slow it is. However, address translation buys you memory protection, so it's worth the extra hardware.
Paging
In a cache, we fetched quantities called data blocks or cache lines. Those are typically somewhere between, say, 4 and 64 bytes.

There is a corresponding terminology in virtual memory to a cache line. It's called a page.

A page is a sequence of N bytes where N is a power of 2.

These days, page sizes are at least 4K in size and maybe as large as 64 K or more.

Let's assume that we have 1M of RAM. RAM is also called physical memory. We can subdivide the RAM into 4K pages. Thus 1M / 4K = 256 pages. Thus, our RAM has 256 physical pages, each holding 4K.

Let's assume we have 10 M of disk. Thus, we have 2560 disk pages.

In principle, each program may have up to 4 G of address space. Thus, it can, in principle, access 2^20 virtual pages. In reality, many of those pages are considered invalid pages.
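The arithmetic above can be checked in a few lines of Python (a sketch using exactly the sizes assumed in the text):

```python
# Verify the page arithmetic above: all sizes are powers of 2,
# so the counts are exact integer divisions.
PAGE_SIZE = 4 * 1024                # 4K pages

ram_bytes  = 1 * 1024 * 1024        # 1M of RAM
disk_bytes = 10 * 1024 * 1024       # 10M of disk
vaddr_bits = 32                     # 4G virtual address space

physical_pages = ram_bytes // PAGE_SIZE            # 256
disk_pages     = disk_bytes // PAGE_SIZE           # 2560
virtual_pages  = (1 << vaddr_bits) // PAGE_SIZE    # 2^20 = 1,048,576

print(physical_pages, disk_pages, virtual_pages)   # 256 2560 1048576
```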
Page Tables
How is an address translated from virtual to physical? First, like the cache, we split up a 32 bit virtual address into a virtual page (which is like a tag) and a page offset.



If this looks a lot like a fully-associative cache with a much, much larger offset, that's because that is basically what it is.

We must convert the virtual page number to a physical page number. In our example, the virtual page consists of 20 bits. A page table is a data structure which consists of 2^20 page table entries (PTEs). Think of the page table as an array of page table entries, indexed by the virtual page number.

The page table's index starts at 0, and ends at 2^20 - 1.

Here's how it looks:




Suppose your program generates a virtual address. You'd extract bits B31-12 to get the virtual page number. Use that as an index into the above page table to access the page table entry (PTE).

Each PTE consists of a valid bit and a 20 bit physical page (it's 20 bits, because we assume we have 1M of RAM, and 1M of RAM requires 20 bits to access each byte). If the valid bit is 1, then the virtual page is in RAM, and you can get the physical page from the PTE. This is called a page hit, and is basically the same as a cache hit.

If the valid bit is 0, the page is not in RAM, and the 20 bit physical page is meaningless. This means, we must get the disk page corresponding to the virtual page from disk and place it into a page in RAM. This is called a page fault.
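A minimal sketch of this lookup in Python (the page table here is a sparse dictionary standing in for the 2^20-entry array, and the mapping 0xF0F0F -> 0x42 is an invented example):

```python
# Toy page table lookup with a valid bit: a hit yields a physical
# address, an invalid entry means a page fault.
PAGE_BITS = 12                     # 4K pages -> 12-bit page offset

page_table = {}                    # sparse stand-in for the 2^20-entry array
page_table[0xF0F0F] = (1, 0x42)    # (valid bit, physical page) - invented mapping

def translate(vaddr):
    vpn    = vaddr >> PAGE_BITS            # virtual page number (bits 31-12)
    offset = vaddr & ((1 << PAGE_BITS) - 1)
    valid, ppn = page_table.get(vpn, (0, 0))
    if not valid:
        raise LookupError("page fault: virtual page 0x%X not in RAM" % vpn)
    return (ppn << PAGE_BITS) | offset     # page hit: physical address

print(hex(translate(0xF0F0F123)))          # physical page 0x42, offset 0x123
```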

Because disk access is slow, slow, slow, we want to minimize the number of page faults.

In general, this is done by making RAM fully associative. That is, any disk page can go into any RAM page (disk, RAM, and virtual pages all have the same size).

In practice, some pages in RAM are reserved for the operating system to make the OS run efficiently.
Translation
Suppose your program generated the following virtual address: F0F0F0F0 in hex (which is 1111 0000 1111 0000 1111 0000 1111 0000 in binary). How would you translate this to a physical address? First, you would split the address into a virtual page and a page offset (see below).

Then, you'd see if the virtual page had a corresponding physical page in RAM using the page table. If the valid bit of the PTE is 1, then you'd translate the virtual page to a physical page, and append the page offset. That would give you a physical address in RAM.
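That split, done in code for the address above (assuming 4K pages, so the top 20 bits are the virtual page and the low 12 bits are the page offset):

```python
# Split the worked example 0xF0F0F0F0 into virtual page and offset.
vaddr  = 0xF0F0F0F0
vpn    = vaddr >> 12        # virtual page: 0xF0F0F
offset = vaddr & 0xFFF      # page offset:  0x0F0
print(hex(vpn), hex(offset))
```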




Huge Page Tables
Page tables can be very large. If every virtual page were valid, our page table would be 2^20 x 21 bits. This is about 3 megabytes just for one program's page table. If there are many programs, there are many tables, each occupying a lot of memory.

What's worse, the page tables we've been talking about are incomplete. If we have a page fault, we need to find the page on disk. Where is it located? That information is kept in another page table, which is indexed by the virtual page (same as the page table we talked about), and tells you where on disk to find it. Then, we have to copy that page to RAM, and update the first page table. Thus, we need two page tables!

These page tables are basically just data. Thus, they occupy memory as any data occupies memory. When we switch from one process to another, we need to load its page table in RAM for easy access. It's useful to keep it located in certain parts of RAM for just such a purpose. If RAM is suitably large, we can have several processes' page tables in RAM at the same time.

A page table register can hold the physical address of the page table that's currently active to get quick access. Still, these are large, and we may want to find ways to speed things up.
Inverted Page Tables
There are many schemes to reduce the size of a page table. One way is to use a hierarchy. Thus, we might have two layers of pages. Bits B31-22 might tell you the first layer, while B21-12 might tell you the second layer (leaving B11-0 as the page offset).
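The two-layer split can be sketched as follows (assuming 4K pages, so the low 12 bits are the offset and each level gets 10 bits; the function name is just for illustration):

```python
# Two-level split of a 32-bit address: 10 bits of level-1 index,
# 10 bits of level-2 index, 12 bits of page offset.
def split_two_level(vaddr):
    l1     = (vaddr >> 22) & 0x3FF    # bits 31-22: index into the top layer
    l2     = (vaddr >> 12) & 0x3FF    # bits 21-12: index into the second layer
    offset = vaddr & 0xFFF            # bits 11-0:  page offset
    return l1, l2, offset

print(split_two_level(0xF0F0F0F0))    # (963, 783, 240)
```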

Another idea is to use a kind of closed hash table. The hash table's size is based on the number of physical pages. The number of physical pages is usually a lot smaller than the number of all virtual pages put together.

A hash function takes a virtual page number as input, and produces an index into the hash table as the result. Each entry of the hash table consists of a virtual page number and a physical page number. You check to see if the virtual page number matches, and if so, you use the physical page.

If it misses, then you must resolve the collision based on the hash table's scheme. In practice, you may need the number of entries of the hash table to be a few times larger than the number of physical pages, to avoid excessive collisions.
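A sketch of such a hash table in Python, using linear probing as one simple closed-hashing way to resolve collisions (the sizes and the sample mapping are illustrative, not from the text):

```python
# Hashed inverted page table: sized by physical pages, not the 2^20
# virtual pages. Each entry is a (virtual page, physical page) pair.
NUM_PHYSICAL_PAGES = 256
TABLE_SIZE = NUM_PHYSICAL_PAGES * 4        # a few times larger, as noted above

table = [None] * TABLE_SIZE

def insert(vpn, ppn):
    i = hash(vpn) % TABLE_SIZE
    while table[i] is not None:            # probe past occupied slots
        i = (i + 1) % TABLE_SIZE
    table[i] = (vpn, ppn)

def lookup(vpn):
    i = hash(vpn) % TABLE_SIZE
    while table[i] is not None:
        if table[i][0] == vpn:             # virtual page number matches
            return table[i][1]
        i = (i + 1) % TABLE_SIZE           # collision: keep probing
    return None                            # not in RAM -> page fault path

insert(0xF0F0F, 0x42)
print(lookup(0xF0F0F))                     # physical page 0x42
```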

An inverted page table takes longer to access because you may have collisions, but it takes up a lot less memory. It helps with page hits. However, if you have a page fault, you still need a page table that maps virtual pages to disk pages, and that will be large.
Translation Lookaside Buffer (TLB)
What's the cost of address translation? For each virtual address, we must access the page table to find the PTE corresponding to the virtual page. We look up the physical page from the PTE, and construct a physical address. Then, we access RAM at the physical address. That's two memory accesses: one to access the PTE, one more to access the data in RAM. Cache helps us cut down the amount of time to access memory, but only if we have cache hits.

The idea of a TLB is to create a special cache for translations. Here's one example of a TLB.






Each row in the TLB is like one slot of a cache. Assume we have 64 rows. When you have a virtual address, you can split it into a virtual page and an offset.

In parallel, compare the virtual page to all of the entries of the TLB (say, 64). There should be, at most, one match. Just like a fully associative cache, you want to check if the TLB entry is valid.

If a TLB hit occurs, replace the virtual page with a physical page to create a physical address.

If there's a TLB miss, then it's still possible that the virtual page resides in RAM. You must now look up the PTE (page table entry) to see if this is the case. If the PTE says the virtual page is in RAM, then you can update the TLB, so that it has a correct virtual to physical page translation.

The TLB is designed to only store a limited subset of virtual to physical page translation. It is really just a cache for the page table, storing only the most frequently used translations.
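A toy model of this behavior (dictionary-based, with an arbitrary eviction choice when full; the sample page-table mapping is invented):

```python
# TLB as a small cache in front of the page table: check the TLB first,
# fall back to the page table on a miss, and install the translation.
TLB_SIZE = 64

page_table = {0xF0F0F: 0x42}     # vpn -> ppn for pages currently in RAM
tlb = {}                         # vpn -> ppn, at most TLB_SIZE entries

def translate(vpn):
    if vpn in tlb:               # TLB hit: no page-table access needed
        return tlb[vpn]
    ppn = page_table.get(vpn)    # TLB miss: consult the page table
    if ppn is None:
        raise LookupError("page fault")
    if len(tlb) >= TLB_SIZE:     # evict an arbitrary entry when full
        tlb.pop(next(iter(tlb)))
    tlb[vpn] = ppn               # correct translation cached for next time
    return ppn

print(translate(0xF0F0F), translate(0xF0F0F))   # miss then hit, same result
```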

The TLB can be kept small enough that it can be fully associative. However, some CPU designers make larger TLBs that are direct mapped or set associative.
Memory Protection
How do virtual addresses give us memory protection? Suppose you work at a post office, which assigns post office boxes to individuals. A person comes in and says they want three post office boxes: 100, 101, and 102.

They insist on using those numbers. Another customer comes in, and insists on using those numbers too. How can you accommodate both customers?

Basically, you cheat. You tell the first customer you have boxes 100, 101, and 102, but you assign him boxes 200, 201, and 202. Similarly, you tell the second customer that you also have boxes 100, 101, and 102, but you assign her boxes 320, 321, and 322.

Whenever customer 1 wants the mail in box 100, you translate it to box 200. Whenever customer 2 wants mail in box 100, you translate it to box 320. Thus, your two customers get to use the box numbers they want, and through the magic of translation, the two customers avoid using each other's boxes.

If the post office wanted to reserve its own boxes for its own use, it could reserve boxes 1 through 100 to itself, and never assign those boxes, directly or indirectly to a customer. Even if a customer wants box 50, they can be assigned box 150, safely outside the range of boxes reserved for the post office.

This same analogy applies to real programs. Each program can assume it uses the same set of 32 bit virtual addresses. We just make sure that those virtual pages do not map to the same disk page, nor to the same physical page.

Whose job is it to assign the pages? It's the operating system's. When a program starts up, it will want a certain range of addresses. The operating system creates a page table for the program, making sure the disk pages it uses do not conflict with the disk pages of other programs.
Invalid Pages
Sometimes you don't really want a program to access all possible 32 bit addresses. This also helps reduce the total size of the page table. One way to prevent a user program from accessing invalid pages is to mark the corresponding page table entries invalid. Thus, an attempt to translate the virtual page to a physical page will fail, and even looking up the virtual page on disk fails.
Page Replacement Schemes
Like cache, you can have page replacement schemes based on FIFO, LRU, LFU, etc. In general, page replacement schemes can be more sophisticated because getting a page off disk is really slow, so you can afford to take more time to make a better choice.
Dirty Bit
In reality, caches usually don't have dirty bits. This means that you must always write back if a cache line is evicted.

However, because disk access is slower, it makes sense to use dirty bits for pages. Thus, if a page hasn't been modified (maybe because it's read only), there's no reason to copy it back to disk. Just toss it out.
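The two ideas above, page replacement and the dirty bit, can be sketched together (LRU order via an OrderedDict; the frame count and page numbers are invented for illustration):

```python
# LRU page replacement with a dirty bit: a clean page can simply be
# tossed, while a dirty page must be copied back to disk first.
from collections import OrderedDict

NUM_FRAMES = 2
frames = OrderedDict()           # vpn -> dirty flag, kept in LRU order
writebacks = []                  # record of pages copied back to "disk"

def access(vpn, write=False):
    if vpn in frames:                        # page hit
        frames.move_to_end(vpn)              # now the most recently used
        frames[vpn] = frames[vpn] or write
        return
    if len(frames) >= NUM_FRAMES:            # page fault, RAM full: evict LRU
        victim, dirty = frames.popitem(last=False)
        if dirty:
            writebacks.append(victim)        # only dirty pages hit the disk
    frames[vpn] = write

access(1, write=True)
access(2)
access(3)            # evicts page 1 (LRU); it was dirty, so it's written back
access(4)            # evicts page 2; it was clean, so it's just tossed
print(writebacks)    # [1]
```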
Cache
Virtual memory works with caching. Basically, once the virtual address is translated to a physical address, then the physical address is passed to the cache, which checks to see if there is a cache hit.
The Oblivious Programmer
As with cache, assembly language programmers don't have to worry about virtual memory. They just see "memory". They don't do anything differently whether there's virtual memory or not. Virtual memory is handled partly by hardware (the translation mechanism) and partly by the operating system (which sets up the page table, handles page faults, etc.).

This is good because at one point, programmers had to worry very much about whether a chunk of memory resided on the disk or in RAM. Programmers had to spend a great deal of effort managing this, and it distracted them from coding. With virtual memory, the management of disk as an extension of RAM is handled automatically.
Shared Memory
You can share memory between two processes by mapping the virtual page to the same disk page. When that disk page is resident in physical memory, then both processes can access the same location.

There may be issues of synchronization to handle, but that's a topic that's best left to a course in operating systems. Suffice it to say that we do have a way to map virtual pages to the same disk page to allow for sharing.

Sharing is available when you want two processes to collaborate in a somewhat safe manner. Both processes, for the most part, have their own memory. They only share a small region between the two of them.
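Within one Python program, the standard multiprocessing.shared_memory module (Python 3.8+) can stand in for this idea: two handles onto the same region play the role of two processes whose virtual pages map to the same physical memory.

```python
# Two handles onto one shared region: a store through one handle is
# visible through the other, just as with shared pages.
from multiprocessing import shared_memory

region = shared_memory.SharedMemory(create=True, size=4096)   # one 4K "page"
other  = shared_memory.SharedMemory(name=region.name)         # second "process" attaches

region.buf[0] = 42             # writer stores through its mapping
value = other.buf[0]           # reader sees the store through its own mapping
print(value)                   # 42

other.close()
region.close()
region.unlink()                # free the region once both sides are done
```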


VIRTUAL MEMORY IN WINDOWS

Processes always reference memory using virtual memory addresses, which are automatically translated to real (RAM) addresses by the hardware. Only core parts of the operating system kernel bypass this address translation and use real memory addresses directly.

Virtual Memory is always in use, even when the memory required by all running processes does not exceed the amount of RAM installed on the system.

An expanded version of this article is available at http://members.shaw.ca/bsanders/WindowsGeneralWeb/RAMVirtualMemoryPageFileEtc.htm.


Processes and Address Spaces

All processes (e.g. application executables) running under 32 bit Windows get virtual memory addresses (a Virtual Address Space) going from 0 to 4,294,967,295 (2^32 - 1 = 4 GB), no matter how much RAM is actually installed on the computer.

In the default Windows OS configuration, 2 GB of this virtual address space are designated for each process’ private use and the other 2 GB are shared between all processes and the operating system. Normally, applications (e.g. Notepad, Word, Excel, Acrobat Reader) use only a small fraction of the 2GB of private address space. The operating system only assigns RAM page frames to virtual memory pages that are in use.

Physical Address Extension (PAE) is the feature of the Intel 32 bit architecture that expands the physical memory (RAM) address to 36 bits (see KB articles 268363 and 261988). PAE does not change the size of the virtual address space, which remains at 4 GB, just the amount of actual RAM that can be addressed by the processor.

The translation between the 32 bit virtual memory address used by the code running in a process and the 36 bit RAM address is handled automatically and transparently by the computer hardware according to translation tables maintained by the operating system. Any virtual memory page (32 bit address) can be associated with any physical RAM page (36 bit address).

Here's a list of how much RAM the various Windows versions and editions support (as of Nov 2004):

Windows NT 4.0: 4 GB
Windows 2000 Professional: 4 GB
Windows 2000 Standard Server: 4 GB
Windows 2000 Advanced Server: 8 GB
Windows 2000 Datacenter Server: 32 GB
Windows XP Professional: 4 GB
Windows Server 2003 Web Edition: 2 GB
Windows Server 2003 Standard Edition: 4 GB
Windows Server 2003 Enterprise Edition: 32 GB
Windows Server 2003 Datacenter Edition: 64 GB

Pagefile

RAM is a limited resource, whereas virtual memory is, for most practical purposes, unlimited. There can be a large number of processes each with its own 2 GB of private virtual address space. When the memory in use by all the existing processes exceeds the amount of RAM available, the operating system will move pages (4 KB pieces) of one or more virtual address spaces to the computer’s hard disk, thus freeing that RAM frame for other uses. In Windows systems, these “paged out” pages are stored in one or more files called pagefile.sys in the root of a partition. There can be one such file in each disk partition. The location and size of the page file is configured in System Properties, Advanced, Performance (click the Settings button).

A frequently asked question is: how big should I make the pagefile? There is no single answer, because it depends on the amount of installed RAM and how much virtual memory the workload requires. If there is no other information available, the normal recommendation of 1.5 times the amount of RAM in the computer is a good place to start. On server systems, a common objective is to have enough RAM so that there is never a shortage and the pagefile is essentially not used. On these systems, having a really large pagefile may serve no useful purpose. On the other hand, disk space is usually plentiful, so having a large pagefile (e.g. 1.5 times the installed RAM) does not cause a problem and eliminates the need to fuss over how large to make it.
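The 1.5x recommendation is simple arithmetic, sketched here as a starting-point calculation only (the function name is illustrative):

```python
# Starting-point pagefile size per the 1.5x rule of thumb; real sizing
# should be tuned from observed pagefile usage, as discussed below.
def suggested_pagefile_mb(ram_mb, factor=1.5):
    return int(ram_mb * factor)

print(suggested_pagefile_mb(512))   # a 512 MB machine -> 768 MB pagefile
```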

Performance, Architectural Limits and RAM

On any computer system, as load (number of users, amount of work being done) increases, performance (how long it takes to do each task) will decrease, but in a non-linear fashion. Any increase in load (demand) beyond a certain point will result in a dramatic decrease in performance. This means that some resource is in critically short supply and has become a bottleneck.

At some point, the resource in critically short supply cannot be increased. This means an architectural limit has been reached. Some commonly reported architectural limits in Windows include:

1. 2 GB of shared virtual address space for the system
2. 2 GB of private virtual address space per process
3. 660 MB System PTE storage
4. 470 MB paged pool storage
5. 256 MB nonpaged pool storage

The above applies to Windows Server 2003 specifically (from Knowledge Base article 294418), but the limits also apply to Windows XP and Windows 2000.

Commonly found and quoted statements such as:

with a Terminal Server, the 2 GB of shared address space will be completely used before 4 GB of RAM is used

may be true in some situations, but you need to monitor your system to know whether they apply to your particular system or not. In some cases, these statements are conclusions from specific Windows NT 4.0 or Windows 2000 environments and don't necessarily apply to Windows Server 2003. Significant changes were made to Windows Server 2003 to reduce the likelihood that these architectural limits will in fact be reached in practice. For example, some processes that were in the kernel have been moved to non kernel processes to reduce the amount of memory used in the shared virtual address space.

Monitoring RAM and Virtual Memory usage

Performance Monitor (Start, Administrative Tools, Performance) is the principal tool for monitoring system performance and identifying what the bottleneck really is. Here's a summary of some important counters and what they tell you.

Memory, Committed Bytes - this is a measure of the demand for virtual memory
It shows how many bytes have been allocated by processes and to which the operating system has committed a RAM page frame or a page slot in the pagefile (perhaps both). As Committed Bytes grows above the amount of available RAM, paging will increase and the amount of the pagefile in use will also increase. At some point, paging activity will start to significantly impact perceived performance.

Process, Working Set, _Total - this is a measure of the amount of virtual memory in "active" use
It shows how much RAM is required so that the actively used virtual memory for all processes is in RAM. This is always a multiple of 4,096, which is the page size used in Windows. As demand for virtual memory increases above the available RAM, the operating system will adjust how much of a process's virtual memory is in its Working Set to optimize the use of available RAM and minimize paging.

Paging File, %pagefile in use - this is a measure of how much of the pagefile is actually being used.
This is the counter to use to determine if the pagefile is an appropriate size. If this counter gets to 100, the pagefile is completely full and things will stop working. Depending on the volatility of your workload, you probably want the pagefile large enough so that it is normally no more than 50 - 75% used. If a lot of the pagefile is in use, having more than one pagefile on different physical disks may improve performance.

Memory, Pages/Sec - this is one of the most misunderstood measures.
A high value for this counter does not necessarily imply that your performance bottleneck is a shortage of RAM. The operating system uses the paging system for purposes other than swapping pages due to memory overcommitment.

Memory, Pages Output/Sec - this shows how many virtual memory pages were written to the pagefile to free RAM page frames for other purposes each second.
This is the best counter to monitor if you suspect that paging is your performance bottleneck. Even if Committed Bytes is greater than the installed RAM, if Pages Output/sec is low or zero most of the time, there is not a significant performance problem from not enough RAM.

Memory, Cache Bytes
Memory, Pool Nonpaged Bytes
Memory, Pool Paged Bytes
Memory, System Code Total Bytes
Memory, System Driver Total Bytes
The sum of these counters is a measure of how much of the 2 GB shared part of the 4 GB virtual address space is actually in use. Use these to determine if your system is reaching one of the architectural limits discussed above.

Memory, Available MBytes - this measures how much RAM is available to satisfy demands for virtual memory (either new allocations, or for restoring a page from the pagefile).
When RAM is in short supply (e.g. Committed Bytes is greater than installed RAM), the operating system will attempt to keep a certain fraction of installed RAM available for immediate use by copying virtual memory pages that are not in active use to the pagefile. For this reason, this counter will not go to zero and is not necessarily a good indication of whether your system is short of RAM.


VIRTUAL MEMORY IN LINUX

Aside from the disk subsystem, nearly every VMM interaction involves the MMU, or Memory Management Unit. The MMU allows the operating system to access memory through virtual addresses by using data structures to track the translations. Its main job is to translate these virtual addresses into physical addresses, so that the right section of RAM is accessed.

The Zoned Buddy Allocator interacts directly with the MMU, providing valid pages when the kernel asks for them. It also manages lists of pages and keeps track of different categories of memory addresses.

The Slab Allocator is another layer in front of the Buddy Allocator, and provides the ability to create caches of memory objects in memory. On x86 hardware, pages of memory must be allocated in 4KB blocks, but the Slab Allocator allows the kernel to store objects that are differently sized, and will manage and allocate real pages appropriately.

Finally, a few kernel tasks run to manage specific aspects of the VMM. Bdflush manages block device pages (disk IO), and kswapd handles swapping pages to disk.

Pages of memory are either Free (available to allocate), Active (in use), or Inactive. Inactive pages of memory are either dirty or clean, depending on whether they have been selected for removal yet. An inactive, dirty page is no longer in use, but is not yet available for re-use. The operating system must scan for dirty pages and decide to deallocate them. After they have been guaranteed to be synced to disk, an inactive page may be “clean,” or ready for re-use.
Tuning the VMM

Tunable parameters may be adjusted in real time via the proc file system, but to persist across a reboot, /etc/sysctl.conf is the preferred method. Parameters can be entered in real time via the sysctl command, and then recorded in the configuration file for reboot persistence.

You can adjust everything from the interval at which pages are scanned to the amount of memory to reserve for pagecache use. Let’s see a few examples.

Often we’ll want to optimize a system for IO performance. A busy database server, for example, is generally only going to run the database, and it doesn’t matter if the user experience is good or not. If the system doesn’t require much memory for user applications, decreasing the available bdflush tunables is beneficial. The specific parameters being adjusted are just too lengthy to explain here, but definitely look into them if you wish to adjust the values further. They are fully explained in vm.txt, usually located at /usr/src/linux/Documentation/sysctl/vm.txt.

In general, an IO-heavy server will benefit from the following settings in sysctl.conf:

vm.bdflush="100 5000 640 2560 150 30000 5000 1884 2"

The pagecache values control how much memory is used for pagecache. The amount of pagecache allowed translates directly to how many programs and open files can be held in memory.

The three tunable parameters with pagecache are:
Min: the minimum amount of memory reserved for pagecache
Borrow: the percentage of pages used in the process of reclaiming pages
Max: percentage at which kswapd will only page pagecache pages; once it falls below, it can swap out process pages again

On a file server, we’d want to increase the amount of pagecache available, so that data isn’t moved to disk as often. Using vm.pagecache="10 50 100" provides more caching, allowing larger and less frequent disk writes for file IO intensive work loads.

On a single-user machine, say your workstation, a large pagecache will keep pages in memory, allowing programs to execute faster. Once the upper limit is reached, however, you will start swapping constantly.

Conversely, a server with many users that frequently executes many different programs will not want high amounts of pagecache. The pagecache can easily eat up available memory if it’s too large, so something like vm.pagecache="10 20 30" is a good compromise.

Finally, the swappiness and vm.overcommit parameters are also very powerful. The overcommit number can be used to allow the kernel to hand out more memory than the system actually has, overcommitting the available pages. Programs that have a habit of trying to allocate many gigabytes of memory are a hassle, and frequently they don’t use nearly that much memory. Upping the overcommit factor will allow these allocations to happen, but if the application really does use all the RAM, you’ll be swapping like crazy in no time (or worse: running out of swap).

The swappiness concept is heavily debated. If you want to decrease the amount of swapping done by the system, just echo a small number in the range 0-100 into /proc/sys/vm/swappiness. You don’t generally want to play with this, as it is more mysterious and non-deterministic than the advanced parameters described above. In general, you want applications to swap so that memory isn’t tied up for no reason. Task-specific servers, where you know the amount of RAM and the application requirements, are best suited for swappiness tuning (using a low number to decrease swapping).

These parameters all require a bit of testing, but in the end, you can dramatically increase the performance of many types of servers. The common case of disappointing disk performance stands to gain the most: Give the settings a try before going out and buying a faster disk array.


HOW IS VIRTUAL MEMORY HANDLED IN MAC OS X?



Memory, or RAM, is handled differently in Mac OS X than it was in earlier versions of the Mac OS. In earlier versions of the Mac OS, each program had assigned to it an amount of RAM the program could use. Users could turn on Virtual Memory, which uses part of the system's hard drive as extra RAM, if the system needed it.

In contrast, Mac OS X uses a completely different memory management system. All programs can use an almost unlimited amount of memory, which is allocated to the application on an as-needed basis. Mac OS X will generously load as much of a program into RAM as it can, even parts that may not currently be in use. This may inflate the amount of actual RAM being used by the system. When RAM is needed, the system will swap or page out those pieces not needed or not currently in use. It is important to bear this in mind because a casual examination of memory usage with the top command via the Terminal application will reveal large amounts of RAM being used by applications. (The Terminal application allows users to access the UNIX operating system which is the foundation of Mac OS X.) When needed, the system will dynamically allocate additional virtual memory, so there is no need for users to try to tamper with how the system handles additional memory needs. However, there is no substitute for having additional physical RAM.

Most Macintoshes produced in the past few years have shipped with either 128 or 256 MB of RAM. Although Apple claims that the minimum amount of RAM that's needed to run Mac OS X is 128 MB, users will find having at least 256 MB is necessary to work in a productive way and having 512 MB is preferable.

Starting with Mac OS 10.4 (Tiger) the minimum will be raised to 256 MB of RAM. Most new Macintoshes are shipping with 512 MB of RAM. For systems which have only 256 MB of RAM it is advisable for users to have at least 512 MB of RAM in order to run applications effectively.

Mac OS 10.5 (Leopard) requires at least 512 MB of RAM. Most users will find that a minimum of 1 GB of RAM is desirable. Less than 1 GB means the system will have to make use of virtual memory, which will adversely affect system performance.
