4.2 CPU Access


4.2.1 Message Passing Priority Scheduled Threads

To provide access to CPU compute cycles, the operating system implements a simple CPU scheduler based on message-passing, priority-scheduled threads, which helps the game manage multiple threads. The attributes of this scheduling scheme are as follows:

Non-preemptive execution
The currently running thread continues to run on the CPU until it explicitly yields, or implicitly gives up the CPU while waiting to receive a message. The only other cause of rescheduling is an interrupt: if an interrupt event awakens a higher-priority thread, that thread preempts the current one. For this reason, an interrupt service thread must not consume extensive CPU cycles.
Priority scheduling
A simple numerical priority determines which thread runs when a currently executing thread yields or an interrupt causes rescheduling.
Message passing
Threads communicate with each other through messages. One thread writes a message into a queue for another thread to retrieve.
Interrupt messages
An application can associate a message with an interrupt event. When the interrupt occurs, the message is delivered to a designated thread's message queue.
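The attributes above can be sketched with the standard libultra thread and message-queue calls (osCreateThread, osCreateMesgQueue, osRecvMesg). This is an illustrative sketch that runs only on N64 hardware; the thread ID, priority, queue depth, and stack size are arbitrary example values.

```c
#include <ultra64.h>

#define STACKSIZE 0x2000

static OSThread    workerThread;
static u64         workerStack[STACKSIZE / sizeof(u64)];
static OSMesgQueue workQueue;
static OSMesg      workMsgBuf[8];

static void workerProc(void *arg)
{
    OSMesg msg;
    for (;;) {
        /* Blocks, implicitly yielding the CPU, until a message arrives. */
        osRecvMesg(&workQueue, &msg, OS_MESG_BLOCK);
        /* ... process msg ... */
    }
}

void startWorker(void)
{
    osCreateMesgQueue(&workQueue, workMsgBuf, 8);
    /* Thread ID 4 and priority 10 are arbitrary example values;
     * the stack pointer is the top of the stack array. */
    osCreateThread(&workerThread, 4, workerProc, NULL,
                   workerStack + STACKSIZE / sizeof(u64), 10);
    osStartThread(&workerThread);
}
```

Because execution is non-preemptive among equal-priority threads, the blocking osRecvMesg call is the natural point at which workerProc gives up the CPU.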

4.2.2 CPU Data Cache

The N64 CPU has a write-back data cache. When the CPU reads data, the cache may satisfy the read request, eliminating the extra cycles needed to access main memory. When the CPU writes data, the data is written to the cache first and flushed to main memory at some point in the future. Therefore, when the CPU modifies data in memory for consumption by the RCP or an I/O DMA engine, software must flush the cache explicitly. The application can choose to flush the entire cache or a particular memory range. If the cache is not flushed, the RCP or DMA engine may read stale data from main memory.

Before the RCP or an I/O DMA engine produces data for the CPU to process, the relevant CPU cache lines must be explicitly invalidated so that the CPU does not read old data from the cache. The invalidation must occur before the RCP or DMA engine places the data in main memory; otherwise, a write-back of a stale dirty cache line could destroy the new data in main memory.
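Both directions can be sketched with the libultra cache calls osWritebackDCache and osInvalDCache. This is an illustrative sketch for N64 hardware; the buffer and its size are example values, and the buffer should be aligned to a 16-byte data cache line.

```c
#include <ultra64.h>

static u8 buf[1024];  /* should be 16-byte (data cache line) aligned */

/* CPU has produced data that the RCP or a DMA engine will read: */
void flushForDevice(void)
{
    /* Push dirty cache lines out to main memory first. */
    osWritebackDCache(buf, sizeof(buf));
    /* ... now start the DMA / hand buf to the RCP ... */
}

/* A device is about to place data in buf for the CPU to read: */
void prepareForDevice(void)
{
    /* Invalidate BEFORE the DMA starts, so a later write-back of a
     * stale dirty line cannot destroy the incoming data. */
    osInvalDCache(buf, sizeof(buf));
    /* ... start the DMA, then read buf after completion ... */
}
```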


4.2.3 No Default Memory Management

As described above, the Nintendo 64 operating system provides multi-threaded, message-passing execution control. It does not impose a default memory management model, but it does provide generic Translation Lookaside Buffer (TLB) access. The application can use the TLB for a variety of purposes, such as virtually contiguous memory or memory protection. For example, an application can use TLB entries to protect against stack overflows.
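The stack-overflow example might look like the following sketch, which assumes the libultra osMapTLB call and the documented convention that a physical address of -1 leaves that page invalid, so any access to it raises a TLB exception. The TLB index and guard-page placement are illustrative assumptions.

```c
#include <ultra64.h>

extern u64 threadStack[];   /* MIPS stacks grow downward toward the base */

void installStackGuard(void)
{
    /* Map the 4 KB page just below the stack base as invalid; an
     * overflow into it then raises a TLB exception instead of
     * silently corrupting adjacent memory. */
    void *guardPage = (void *)((u8 *)threadStack - 0x1000);

    /* TLB index 0 is an arbitrary example; -1 for both physical
     * addresses marks the pair invalid, and -1 for the ASID makes
     * the entry global (assumptions to verify against the man page). */
    osMapTLB(0, OS_PM_4K, guardPage, (u32)-1, (u32)-1, -1);
}
```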


4.2.4 Timers

Simple timer facilities are provided, useful for performance profiling, real-time scheduling, or game timing. See the man page for osGetTime for more information.
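A minimal profiling sketch using osGetTime, assuming the libultra OSTime type, the OS_CYCLES_TO_USEC conversion macro, and the debug print routine osSyncPrintf (illustrative; runs only on N64 hardware):

```c
#include <ultra64.h>

void timedWork(void)
{
    OSTime start = osGetTime();

    /* ... work being measured ... */

    OSTime elapsed = osGetTime() - start;
    osSyncPrintf("took %llu usec\n", OS_CYCLES_TO_USEC(elapsed));
}
```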


4.2.5 Variable TLB Page Sizes

The N64 CPU supports variable Translation Lookaside Buffer (TLB) page sizes. This enables additional, useful techniques such as a "poor man's 2-way set associative cache": because the data cache is 8 KB of direct-mapped memory and the TLB page size can be set to 4 KB, the application can roll a 4 KB cache window through a contiguous chunk of memory without wiping out the other 4 KB in the cache.


4.2.6 CoProcessor 0 Access

A set of application programming interfaces (APIs) is also provided for CoProcessor 0 register access, including the CPU cycle-accurate timer, the cause of the last exception, and the status register.


4.2.7 I/O Access and Management

The I/O subsystem provides functional access to the individual I/O hardware sub-components. Most functions translate logical requests into raw physical access to the I/O device.

Figure 4-2 I/O Access and Management Software Components
[Figure 4-2]

4.2.8 PI Manager

The N64 also provides a peripheral interface (PI) device manager that arbitrates access to the peripheral device among multiple threads. For example, the audio thread may want to page in the next set of audio samples while the graphics thread needs to page in a future database. The PI manager is a thread that waits for commands to be placed in a message queue. At the completion of each command, a message is sent back to the thread that requested the DMA.
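The request/completion flow can be sketched with the libultra calls osCreatePiManager and osPiStartDma. This is an illustrative N64-only sketch; queue sizes and the DMA parameters are example values.

```c
#include <ultra64.h>

static OSMesgQueue piCmdQueue;
static OSMesg      piCmdBuf[8];
static OSMesgQueue dmaDoneQueue;
static OSMesg      dmaDoneBuf[1];
static OSIoMesg    ioMesg;

void initPi(void)
{
    /* Start the PI manager thread with its command queue. */
    osCreatePiManager(OS_PRIORITY_PIMGR, &piCmdQueue, piCmdBuf, 8);
    osCreateMesgQueue(&dmaDoneQueue, dmaDoneBuf, 1);
}

void loadFromCart(u32 cartAddr, void *dest, u32 nbytes)
{
    /* Invalidate the destination first (see 4.2.2). */
    osInvalDCache(dest, (s32)nbytes);

    /* Queue a read from the cartridge; the PI manager performs it. */
    osPiStartDma(&ioMesg, OS_MESG_PRI_NORMAL, OS_READ,
                 cartAddr, dest, nbytes, &dmaDoneQueue);

    /* Block until the PI manager reports completion. */
    osRecvMesg(&dmaDoneQueue, NULL, OS_MESG_BLOCK);
}
```

A thread that does not want to block (such as the audio thread paging in samples ahead of time) can instead poll or receive the completion message later.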


4.2.9 VI Manager

A simple video interface (VI) device manager keeps track of when vertical retrace occurs and graphics rendering is complete. It also updates the proper video modes for the new video field. The VI manager can send a message to the game application on each vertical retrace; the game can use this message to synchronize rendering of the next frame.
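Retrace synchronization can be sketched with the libultra calls osCreateViManager and osViSetEvent (an illustrative N64-only sketch; the retrace count of 1 means one message per retrace):

```c
#include <ultra64.h>

static OSMesgQueue retraceQueue;
static OSMesg      retraceBuf[1];

void initRetrace(void)
{
    osCreateViManager(OS_PRIORITY_VIMGR);
    osCreateMesgQueue(&retraceQueue, retraceBuf, 1);
    /* Deliver a message on every vertical retrace. */
    osViSetEvent(&retraceQueue, NULL, 1);
}

void gameLoop(void)
{
    for (;;) {
        /* Wait for the next retrace, then render the next frame. */
        osRecvMesg(&retraceQueue, NULL, OS_MESG_BLOCK);
        /* ... render ... */
    }
}
```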