David S. Hutton - Poughkeepsie NY, US Kathryn M. Jackson - Poughkeepsie NY, US Keith N. Langston - Woodstock NY, US Pak-kin Mak - Poughkeepsie NY, US Bruce Wagar - Tempe AZ, US
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 12/00
US Classification:
711130, 711122
Abstract:
A dual system shared cache directory structure for a cache memory performs the roles of an inclusive shared system cache (i.e., data) and system control (i.e., coherency). The system includes two separate cache directories in the shared system cache. The two directories are substantially equal in size and collectively large enough to contain all of the processor cache directory entries, but only one of them hosts system-cache data backing the most recent fraction of data accessed by the processors. The other directory retains only addresses, including addresses of lines LRUed out of the first directory, along with the identity of the processor using the data. By this expedient, only the directory known to be backed by system-cached data is evaluated for system cache data.
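A minimal sketch of how such a dual-directory lookup might be organized, assuming a direct-mapped layout; the structure names, sizes, and fields below are illustrative assumptions, not taken from the patent text.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch only: one directory backed by system-cache data,
 * a second address-only directory tracking lines LRUed out of the
 * first plus the identity of the processor holding them. */
#define DIR_ENTRIES 4096

typedef struct {
    uint64_t tag;        /* line address tag                    */
    bool     valid;
    uint8_t  owner_cpu;  /* processor known to hold the line    */
} dir_entry_t;

typedef struct {
    dir_entry_t data_dir[DIR_ENTRIES];  /* backed by cached data    */
    dir_entry_t addr_dir[DIR_ENTRIES];  /* addresses/ownership only */
} shared_cache_dir_t;

/* Only the data-backed directory is consulted when the goal is to
 * return data from the shared cache; the address-only directory is
 * used purely for coherency tracking. */
static bool lookup_for_data(const shared_cache_dir_t *d,
                            uint64_t tag, unsigned idx)
{
    return d->data_dir[idx].valid && d->data_dir[idx].tag == tag;
}
```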
David S. Hutton - Poughkeepsie NY, US Kathryn M. Jackson - Poughkeepsie NY, US Keith N. Langston - Woodstock NY, US Pak-kin Mak - Poughkeepsie NY, US Chung-Lung K. Shum - Wappingers Falls NY, US
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 12/00
US Classification:
711141, 711118, 711119, 711144, 711146
Abstract:
Portions of data in a processor system are stored in a slower main memory and are transferred to a faster memory comprising a hierarchy of cache structures between one or more processors and the main memory. For a system with shared L2 cache(s) between the processor(s) and main memory, an individual L1 cache of a processor must first communicate with an associated L2 cache, or check with that L2 cache, to obtain a copy of a particular line prior to, or upon, modification or appropriation of data at a given cached location. The individual L1 cache further includes provisions for notifying the L2 cache when the data stored in a particular cache line in the L1 cache has been replaced. When a particular cache line is disowned by an L1 cache, the L2 cache is updated to change the state of that line from exclusive to a particular identified CPU to exclusive to no CPU, thereby reducing cross-interrogate delays when another processor acquires the same cache line.
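A hedged sketch of the disown notification described above; the state names, encoding, and function below are assumptions made to keep the example concrete.

```c
#include <stdint.h>

/* Illustrative ownership states for a shared L2 directory entry;
 * names and encoding are assumptions, not taken from the patent. */
typedef enum {
    OWN_INVALID,
    OWN_SHARED,
    OWN_EXCL_TO_CPU,   /* exclusive to one identified CPU        */
    OWN_EXCL_TO_NONE   /* exclusive, but no CPU currently owns it */
} l2_own_state_t;

typedef struct {
    uint64_t       tag;
    l2_own_state_t state;
    uint8_t        owner_cpu;
} l2_dir_entry_t;

/* When an L1 replaces (disowns) a line it held exclusively, it tells
 * the L2, which drops the line to "exclusive to no CPU".  A later
 * requester can then take the line without a cross-interrogate of
 * the previous owner. */
void l1_disown_notify(l2_dir_entry_t *e, uint8_t cpu)
{
    if (e->state == OWN_EXCL_TO_CPU && e->owner_cpu == cpu)
        e->state = OWN_EXCL_TO_NONE;
}
```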
Method And Apparatus For Implementing A Combined Data/Coherency Cache
Keith N. Langston - Woodstock NY, US Pak-kin Mak - Poughkeepsie NY, US Bruce A. Wagar - Hopewell Junction NY, US
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 12/00
US Classification:
711128, 711141, 711136, 711E12027
Abstract:
A method and apparatus for implementing a combined data/coherency cache for a shared-memory multi-processor. The combined data/coherency cache includes a system cache with a number of entries. The method includes building a system cache directory with a number of entries equal to the number of entries of the system cache. The building includes designating, for each entry, a number of sub-entries determined by the number of sub-entries operable for performing system cache coherency functions. The building also includes providing a sub-entry logic designator for each entry, and mapping one of the sub-entries of each entry to the system cache via the sub-entry logic designator.
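One possible layout for a directory entry with coherency sub-entries and a designator that maps a single sub-entry to the system-cache data; the sizes and field names below are assumptions for illustration only.

```c
#include <stdint.h>

/* Illustrative layout only: each directory entry carries several
 * sub-entries used for coherency tracking, and a designator field
 * records which one of them is currently backed by data in the
 * system cache. */
#define SUBENTRIES_PER_ENTRY 4

typedef struct {
    uint64_t tag;
    uint8_t  cpu_mask;          /* processors tracked for coherency */
} sub_entry_t;

typedef struct {
    sub_entry_t sub[SUBENTRIES_PER_ENTRY];
    uint8_t     data_designator; /* index of the sub-entry mapped to
                                    the system-cache data entry      */
} dir_entry_t;

/* The directory has exactly as many entries as the system cache, so
 * entry i of the directory pairs with entry i of the cache, and only
 * the designated sub-entry corresponds to resident data. */
static const sub_entry_t *data_backed_sub(const dir_entry_t *e)
{
    return &e->sub[e->data_designator];
}
```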
David S. Hutton - Poughkeepsie NY, US Kathryn M. Jackson - Poughkeepsie NY, US Keith N. Langston - Woodstock NY, US Pak-kin Mak - Poughkeepsie NY, US Chung-Lung K. Shum - Wappingers Falls NY, US
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 12/00
US Classification:
711141, 711118, 711119, 711122, 711144, 711146
Abstract:
Caching in which portions of data are stored in slower main memory and are transferred to faster memory between one or more processors and the main memory. An individual cache system must communicate with, or check with, other associated cache systems to determine whether they contain a copy of a given cached location prior to, or upon, modification or appropriation of data at that location. The cache further includes provisions for determining when the data stored in a particular memory location may be replaced.
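A minimal sketch of the peer cross-check mentioned above, assuming a small fixed number of peer caches with flat per-line directories; all names and the lookup loop are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch: before one cache modifies or replaces a line,
 * it asks its peers whether they still hold a copy. */
#define NUM_CACHES 4

typedef struct {
    uint64_t tag;
    bool     valid;
} peer_line_t;

/* Returns true if any other cache still holds the line. */
bool peers_hold_copy(const peer_line_t peers[NUM_CACHES],
                     int self, uint64_t tag)
{
    for (int i = 0; i < NUM_CACHES; i++)
        if (i != self && peers[i].valid && peers[i].tag == tag)
            return true;
    return false;
}
```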
David Hutton - Poughkeepsie NY, US Kathryn Jackson - Poughkeepsie NY, US Keith Langston - Woodstock NY, US Pak-kin Mak - Poughkeepsie NY, US Arthur O'Neill - Wappingers Falls NY, US
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 12/00
US Classification:
711122000
Abstract:
Using a local change bit to direct the install state of a data line. A multi-processor system has a plurality of individual processors, each with an associated L1 cache, at least one shared main memory, and at least one shared L2 cache. The method described herein involves writing a data line into an L2 cache and using a local change bit to direct the install state of the data line.
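A hedged sketch of how a local change bit might select the install state when a line is written into the shared L2; the state names and install policy shown are assumptions used to make the idea concrete.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: a per-line "local change" bit chooses the state
 * the line is installed with when it is written into the L2. */
typedef enum { INSTALL_UNCHANGED, INSTALL_CHANGED } l2_install_state_t;

typedef struct {
    uint64_t            tag;
    l2_install_state_t  state;
} l2_line_t;

void install_into_l2(l2_line_t *line, uint64_t tag, bool local_change_bit)
{
    line->tag   = tag;
    line->state = local_change_bit ? INSTALL_CHANGED : INSTALL_UNCHANGED;
}
```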
Bias Filter Memory For Filtering Out Unnecessary Interrogations Of Cache Directories In A Multiprocessor System
Bradford M. Bean - New Paltz NY Keith N. Langston - Ulster Park NY Richard L. Partridge - Poughkeepsie NY
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 13/00 G06F 15/16
US Classification:
364200
Abstract:
The disclosed embodiments filter out many unnecessary interrogations of the cache directories of processors in a multiprocessor (MP) system, thereby reducing the required size of the buffer invalidation address stack (BIAS) associated with each processor, and increasing the efficiency of each processor by allowing it to access its cache during the machine cycles which in prior MPs had been required for invalidation interrogation. Invalidation interrogation of each remote processor cache directory may be done when a channel or processor generates a store request to a shared main storage. A filter memory is provided with each BIAS in the MP. The filter memory records the cache block address in each invalidation request transferred to its associated BIAS. The filter memory deletes an address when it is deleted from the cache directory, and retains the most recent cache access requests. The filter memory may have one or more registers, or be an array.
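A minimal sketch of such a filter memory in front of a BIAS, assuming a few registers with simple round-robin replacement; the slot count, replacement policy, and function names are assumptions for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch: the filter remembers block addresses already
 * forwarded for invalidation and suppresses repeat interrogations. */
#define FILTER_SLOTS 4

typedef struct {
    uint64_t addr[FILTER_SLOTS];
    bool     valid[FILTER_SLOTS];
    unsigned next;               /* round-robin replacement pointer */
} bias_filter_t;

/* Returns true if the invalidation request must be sent on to the
 * processor's BIAS; false if the filter has already seen the block. */
bool filter_check_and_record(bias_filter_t *f, uint64_t block_addr)
{
    for (unsigned i = 0; i < FILTER_SLOTS; i++)
        if (f->valid[i] && f->addr[i] == block_addr)
            return false;                 /* already interrogated */
    f->addr[f->next]  = block_addr;
    f->valid[f->next] = true;
    f->next = (f->next + 1) % FILTER_SLOTS;
    return true;
}

/* When the cache directory deletes a block, the filter entry is
 * deleted too, so a later store to that block interrogates again. */
void filter_delete(bias_filter_t *f, uint64_t block_addr)
{
    for (unsigned i = 0; i < FILTER_SLOTS; i++)
        if (f->valid[i] && f->addr[i] == block_addr)
            f->valid[i] = false;
}
```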