James Pomerene Phones & Addresses

  • 412 N Calvin Park Blvd, Rockford, IL 61107 (779) 423-1375
  • 403 Bedford Rd, Chappaqua, NY 10514 (914) 238-3860
  • Leesburg, VA
  • Millbury, MA
  • Boca Raton, FL
  • San Diego, CA
  • New Cassel, NY
  • 412 N Calvin Park Blvd, Rockford, IL 61107 (619) 865-4314

Work

Position: Service Occupations

Publications

Wikipedia

James H. Pomerene


James Herbert Pomerene (June 22, 1920 – December 7, 2008) was an electrical engineer and computer pioneer.

US Patents

Multiple Sequence Processor System

US Patent:
5,297,281, Mar 22, 1994
Filed:
Feb 13, 1992
Appl. No.:
7/836193
Inventors:
Philip G. Emma - Danbury CT
Joshua W. Knight - Mohegan Lake NY
James H. Pomerene - Chappaqua NY
Rudolph N. Rechtschaffen - Scarsdale NY
Frank J. Sparacio - Sarasota FL
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 9/38
G06F 9/28
US Classification:
395/650
Abstract:
A digital computer includes a main and an auxiliary pipeline processor which are configured to concurrently execute contiguous groups of instructions taken from a single instruction sequence. The instructions in a sequence may be divided into groups by using either taken-branch instructions or certain instructions which may change the contents of the general purpose registers as group delimiters. Both methods of grouping the instructions use a branch history table to predict the sequence in which the instructions will be executed.
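The grouping idea in the abstract (using taken branches as delimiters between groups handed to the two pipelines) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the instruction encoding and the set of taken branches are hypothetical stand-ins for what the branch history table would predict.

```python
# Hedged sketch: split a linear instruction sequence into groups, using each
# taken-branch instruction (which a branch history table would predict in the
# patent's scheme) as a group delimiter.
def split_into_groups(instructions, taken_branches):
    """instructions: list of (address, opcode) pairs;
    taken_branches: set of addresses of predicted-taken branches."""
    groups, current = [], []
    for addr, op in instructions:
        current.append((addr, op))
        if addr in taken_branches:   # a taken branch closes the group
            groups.append(current)
            current = []
    if current:                      # trailing group with no final branch
        groups.append(current)
    return groups

seq = [(0, "load"), (4, "add"), (8, "br"), (12, "store"), (16, "br")]
print(split_into_groups(seq, taken_branches={8, 16}))
# -> [[(0, 'load'), (4, 'add'), (8, 'br')], [(12, 'store'), (16, 'br')]]
```

Each resulting group could then be dispatched to either the main or the auxiliary pipeline for concurrent execution.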

Cache Miss Facility With Stored Sequences For Data Fetching

US Patent:
5,233,702, Aug 3, 1993
Filed:
Aug 7, 1989
Appl. No.:
7/390587
Inventors:
Philip G. Emma - Danbury CT
Joshua W. Knight - Mohegan Lake NY
James H. Pomerene - Chappaqua NY
Thomas R. Puzak - Ridgefield CT
Rudolph N. Rechtschaffen - Scarsdale NY
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 9/32
G06F 12/00
G06F 12/02
G06F 13/00
US Classification:
395/425
Abstract:
A cache memory system develops an optimum sequence for transferring data values between a main memory and a line buffer internal to the cache. At the end of a line transfer, the data in the line buffer is written into the cache memory as a block. Following an initial cache miss, the cache memory system monitors the sequence of data requests received for data in the line that is being read in from main memory. If the sequence being used to read in the data causes the processor to wait for a specific data value in the line, a new sequence is generated in which the specific data value is read at an earlier time in the transfer cycle. This sequence is associated with the instruction that caused the first miss and is used for subsequent misses caused by the instruction. If, in the process of handling a first miss related to a specific instruction, a second miss occurs which is caused by the same instruction but which is for data in a different line of memory, the sequence associated with the instruction is marked as an ephemeral miss. Data transferred to the line buffer in response to an ephemeral miss is not stored in the cache memory and is limited to that portion of the line accessed within the line buffer.
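The core idea of learning a per-instruction transfer sequence can be sketched as below. This is a simplified illustration under assumed names: `learn_sequence` simply promotes the words the processor demanded (and waited for) to the front of the next transfer order for that instruction, leaving the rest of the line in its default order.

```python
# Hedged sketch of the per-instruction fetch-sequence idea: after a miss, the
# words the processor actually waited for are moved to the front of the
# transfer order used the next time the same instruction misses.
def learn_sequence(default_order, demanded_words):
    """default_order: word indices in the line, in default transfer order;
    demanded_words: words the processor stalled on, in request order."""
    rest = [w for w in default_order if w not in demanded_words]
    return list(demanded_words) + rest

line = list(range(8))                       # an 8-word cache line
print(learn_sequence(line, demanded_words=[5, 2]))
# -> [5, 2, 0, 1, 3, 4, 6, 7]
```

In the patent's scheme this learned order is keyed by the missing instruction, so later misses from the same instruction see their critical words arrive first.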

Cache Memory Architecture With Decoding

US Patent:
4,437,149, Mar 13, 1984
Filed:
Nov 17, 1980
Appl. No.:
6/207481
Inventors:
James H. Pomerene - Chappaqua NY
Rudolph N. Rechtschaffen - Scarsdale NY
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 9/30
G06F 13/00
US Classification:
364/200
Abstract:
An information processing unit and storage system comprising at least one low speed, high capacity main memory having relatively long access time and including a plurality of data pages stored therein and at least one high speed, low capacity Cache memory means having a relatively short access time and adapted to store a predetermined plurality of subsets of the information stored in said main memory data pages. Instruction decoding means are located in the communication channel between the main Memory and the Cache which are operative to at least partially decode instructions being transferred from main Memory to Cache. The at least partial decoding comprises expanding the instruction format from that utilized in the main Memory storage to one more readily executable by the processor prior to storing said instructions in the Cache. Said decoding means includes a logic circuit means for determining whether a given instruction is susceptible of partial decoding and means for determining that a particular instruction has already been partially decoded (i.e., after a first accessing of said instruction by the processor from Cache).
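The fill-path predecoding idea can be sketched as follows. This is an illustrative model only: the decodability check and the expanded format are hypothetical stand-ins for the patent's logic circuit and instruction-format expansion, and each cache entry carries a flag so that a later access can tell the instruction is already predecoded.

```python
# Hedged sketch: partially decode instructions on the path from main memory
# to the cache, tagging each entry so repeated work is avoided on re-fetch.
def fill_cache_line(raw_instructions, decodable):
    """raw_instructions: encoded opcodes arriving from main memory;
    decodable: map from opcode to its expanded (predecoded) form."""
    line = []
    for insn in raw_instructions:
        if insn in decodable:                       # "logic circuit" check
            line.append({"insn": insn,
                         "expanded": decodable[insn],
                         "predecoded": True})       # marked as decoded
        else:
            line.append({"insn": insn, "predecoded": False})
    return line

table = {"A7": ("add", "r1", "imm")}                # hypothetical encoding
print(fill_cache_line(["A7", "FF"], table))
```

The flag is what lets the processor skip re-expansion after the first access, which is the point of doing the decode between memory and cache rather than in the execution pipeline.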

Subroutine Return Through Branch History Table

US Patent:
5,276,882, Jan 4, 1994
Filed:
Jul 27, 1990
Appl. No.:
7/558998
Inventors:
Philip G. Emma - Danbury CT
Joshua W. Knight - Mohegan Lake NY
James H. Pomerene - Chappaqua NY
Rudolph N. Rechtschaffen - Scarsdale NY
Frank J. Sparacio - Sarasota FL
Charles F. Webb - Poughkeepsie NY
Assignee:
International Business Machines Corp. - Armonk NY
International Classification:
G06F 9/42
G06F 9/38
US Classification:
395/700
Abstract:
Method and apparatus for correctly predicting an outcome of a branch instruction in a system of the type that includes a Branch History Table (BHT) and branch instructions that implement non-explicit subroutine calls and returns. Entries in the BHT have two additional state fields: a CALL field to indicate that the branch entry corresponds to a branch that may implement a subroutine call, and a PSEUDO field. The PSEUDO field represents linkage information and creates a link between a subroutine entry and a subroutine return. A target address of a successful branch instruction is used to search the BHT. The branch is known to be a subroutine return if a target quadword contains an entry prior to a target halfword that has the CALL field set. The entry with the CALL bit set is thus known to be the corresponding subroutine call, and the entry point to the subroutine is given by the target address stored within the entry. A PSEUDO entry is inserted into the BHT at the location corresponding to the entry point of the subroutine, the PSEUDO entry being designated as such by having the PSEUDO field asserted.
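The return-detection step described above can be sketched as a small lookup. This is a simplified model under stated assumptions: the entry layout, the 16-byte quadword granularity, and the linear search are illustrative, not the patent's hardware organization.

```python
# Hedged sketch of the CALL-field linkage: a branch is recognized as a
# subroutine return when the quadword containing its target holds an entry,
# prior to the target address, whose CALL field is set.
class BHTEntry:
    def __init__(self, addr, target, call=False, pseudo=False):
        self.addr, self.target = addr, target
        self.call, self.pseudo = call, pseudo

def subroutine_entry_point(bht, branch_target):
    """If the branch targeting branch_target is a return, give back the
    subroutine entry point recorded in the matching CALL entry; else None."""
    quad = branch_target - branch_target % 16     # containing quadword
    for e in bht:
        if e.call and quad <= e.addr < branch_target:
            return e.target                       # the call's target = entry
    return None

# a call at 0x100 whose target (the subroutine entry) is 0x200
bht = [BHTEntry(addr=0x100, target=0x200, call=True)]
print(hex(subroutine_entry_point(bht, branch_target=0x104)))  # -> 0x200
```

A PSEUDO entry would then be inserted at that entry point (0x200 here) to link entry and return for later predictions.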

Cache Remapping Using Synonym Classes

US Patent:
5,584,002, Dec 10, 1996
Filed:
Feb 22, 1993
Appl. No.:
8/021010
Inventors:
Philip G. Emma - Danbury CT
Joshua W. Knight - Mohegan Lake NY
Keith N. Langston - Ulster Park NY
James H. Pomerene - Chappaqua NY
Thomas R. Puzak - Ridgefield CT
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 11/20
US Classification:
395/403
Abstract:
A method for addressing data in a cache unit which has a plurality of congruence classes, following a failure which disables one or more of the congruence classes in the cache unit. A plurality of synonym classes are established. A subset of the congruence classes is assigned to each of the synonym classes. Any disabled congruence classes are identified. The synonym class to which the disabled congruence class belongs is identified. An alternate congruence class is selected which belongs to the same synonym class as the disabled congruence class. When a request is received by the cache to store a line of data into the disabled congruence class, the line is stored into the alternate congruence class in response to the request.
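The remapping rule can be sketched as below. This is a minimal illustration, not the patent's addressing logic: the pairing of classes into synonym classes and the choice of the first healthy alternate are assumptions made for the example.

```python
# Hedged sketch: congruence classes are grouped into synonym classes; a
# request aimed at a disabled class is redirected to a working member of
# the same synonym class.
def remap_class(congruence_class, disabled, synonyms):
    """congruence_class: class selected by the address;
    disabled: set of failed classes;
    synonyms: list of sets, each a synonym class."""
    if congruence_class not in disabled:
        return congruence_class                   # healthy: use as-is
    group = next(g for g in synonyms if congruence_class in g)
    for alt in sorted(group):                     # pick a working alternate
        if alt not in disabled:
            return alt
    raise RuntimeError("entire synonym class disabled")

# 8 congruence classes; each synonym class pairs class i with class i+4
synonyms = [{0, 4}, {1, 5}, {2, 6}, {3, 7}]
print(remap_class(2, disabled={2}, synonyms=synonyms))  # -> 6
```

Because every disabled class has at least one sibling in its synonym class, lines destined for a failed class still have a well-defined home in the cache.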

Apparatus And Method For Prefetching Subblocks From A Low Speed Memory To A High Speed Memory Of A Memory Hierarchy Depending Upon State Of Replacing Bit In The Low Speed Memory

US Patent:
4,774,654, Sep 27, 1988
Filed:
Dec 24, 1984
Appl. No.:
6/685527
Inventors:
James H. Pomerene - Chappaqua NY
Thomas R. Puzak - Cary NC
Rudolph N. Rechtschaffen - Scarsdale NY
Kimming So - Pleasantville NY
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 12/08
G06F 12/12
US Classification:
364/200
Abstract:
A prefetching mechanism for a memory hierarchy which includes at least two levels of storage, with L1 being a high-speed low-capacity memory, and L2 being a low-speed high-capacity memory, with the units of L2 and L1 being blocks and sub-blocks respectively, with each block containing several sub-blocks in consecutive addresses. Each sub-block is provided an additional bit, called an r-bit, which indicates that the sub-block has been previously stored in L1 when the bit is 1, and has not been previously stored in L1 when the bit is 0. Initially, when a block is loaded into L2, each of the r-bits in the block is set to 0. When a sub-block is transferred from L1 to L2, its r-bit is then set to 1 in the L2 block, to indicate its previous storage in L1. When the CPU references a given sub-block which is not present in L1, and has to be fetched from L2 to L1, the remaining sub-blocks in this block having r-bits set to 1 are prefetched to L1. This prefetching of the other sub-blocks having r-bits set to 1 results in a more efficient utilization of the L1 storage capacity and a higher hit ratio.
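The r-bit rule above can be sketched in a few lines. This is an illustrative model only; the list-of-bits representation and function name are assumptions, not the patent's hardware.

```python
# Hedged sketch of the r-bit mechanism: on an L1 miss, transfer the demanded
# sub-block plus every sibling sub-block whose r-bit is 1 (i.e., sub-blocks
# that have previously lived in L1).
def on_l1_miss(l2_block_rbits, missed_sub):
    """l2_block_rbits: r-bit per sub-block of the containing L2 block;
    missed_sub: index of the sub-block the CPU referenced."""
    return [missed_sub] + [s for s, r in enumerate(l2_block_rbits)
                           if r == 1 and s != missed_sub]

rbits = [0, 1, 0, 1]          # sub-blocks 1 and 3 were in L1 before
print(on_l1_miss(rbits, missed_sub=2))   # -> [2, 1, 3]
```

Only sub-blocks with a history of L1 residence ride along with the demand fetch, which is why the scheme uses L1 capacity more efficiently than prefetching the whole block.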

Simultaneous Prediction Of Multiple Branches For Superscalar Processing

US Patent:
5,434,985, Jul 18, 1995
Filed:
Aug 11, 1992
Appl. No.:
7/928851
Inventors:
Philip G. Emma - Danbury CT
Joshua W. Knight - Mohegan Lake NY
James H. Pomerene - Chappaqua NY
Thomas R. Puzak - Ridgefield CT
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 9/38
US Classification:
395/375
Abstract:
System and method for predicting a multiplicity of future branches simultaneously (in parallel) from an executing program, to enable the simultaneous fetching of multiple disjoint program segments. Additionally, the present invention detects divergence of incorrect branch predictions and provides correction for such divergence without penalty. By predicting an entire sequence of branches in parallel, the present invention removes the restriction that decoding of multiple instructions in a superscalar environment must be limited to a single branch group. As a result, the speed of today's superscalar processors can be significantly increased. The present invention includes three main embodiments: (1) the first embodiment is directed to a simplex multibranch prediction device that can predict a plurality of branch groups in one cycle and provide early detection of wrong predictions; (2) the second embodiment is directed to a duplex multibranch prediction device that can detect divergence in a predicted stream and provide redirection (correction) within the stream; and (3) the third embodiment is directed to an n-plex multibranch prediction device that can make n branch predictions simultaneously and provide early detection of wrong predictions as well as correction of wrong predictions.

High Speed Buffer Store Arrangement For Quick Wide Transfer Of Data

US Patent:
4,823,259, Apr 18, 1989
Filed:
Jun 23, 1988
Appl. No.:
7/213506
Inventors:
Frederick J. Aichelmann - Hopewell Junction NY
Rex H. Blumberg - Hyde Park NY
David Meltzer - Wappingers Falls NY
James H. Pomerene - Chappaqua NY
Thomas R. Puzak - Yorktown Heights NY
Rudolph N. Rechtschaffen - Scarsdale NY
Frank J. Sparacio - Bergen NJ
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 12/00
US Classification:
364/200
Abstract:
A high speed buffer store arrangement for use in a data processing system having multiple cache buffer storage units in a hierarchical arrangement permits fast transfer of wide data blocks. On each cache chip, input and output latches are integrated, thus avoiding separate intermediate buffering. Input and output latches are interconnected by 64-byte wide data buses so that data blocks can be shifted rapidly from one cache hierarchy level to another and back. Chip-internal feedback connections from output to input latches allow data blocks to be selectively reentered into a cache after reading. An additional register array is provided so that data blocks can be furnished again after transfer from cache to main memory or CPU without accessing the respective cache. Wide data blocks can be transferred within one cycle, thus tying up caches much less in transfer operations, so that they have increased availability.