Application Programming Interfaces And Methods Enabling A Host To Interface With A Network Processor
Chi-Hua Chang - Milpitas CA Man Dieu Trinh - San Jose CA
Assignee:
Intel Corporation - Santa Clara CA
International Classification:
G06F 15/173
US Classification:
709/223, 709/229
Abstract:
The present invention provides application programming interfaces (APIs) which allow a host to control the functioning of a network processor and also perform various network data manipulation functions. The APIs are intended to encapsulate as much as possible the underlying messaging between the host system and the network processor and to hide the low-level device command details from the host. The APIs are provided by a program module. A host may invoke an API which is then communicated by the program module to the network processor, where functions corresponding to the API are performed. Responses to the APIs may be forwarded back to the host. Asynchronous callback functions, invoked in response to the API calls, may be used to forward responses to the host.
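The host/module/callback flow described in this abstract can be sketched in software. The following is a minimal Python model only; every name in it (NetworkProcessorModule, open_vc, OPEN_VC) is invented for illustration and does not come from the patent:

```python
class NetworkProcessorModule:
    """Program module that hides device-level messaging from the host."""

    def __init__(self, send_to_device):
        self._send = send_to_device   # stand-in for the real message channel
        self._callbacks = {}          # request id -> host callback
        self._next_id = 0

    def open_vc(self, vc_params, on_complete):
        """Host-facing API: queue the low-level command, return immediately."""
        req_id = self._next_id
        self._next_id += 1
        self._callbacks[req_id] = on_complete
        # The device-level command layout lives here, hidden from the host.
        self._send({"cmd": "OPEN_VC", "id": req_id, "params": vc_params})
        return req_id

    def on_device_response(self, msg):
        """Runs when the network processor replies; fires the host's callback."""
        self._callbacks.pop(msg["id"])(msg["status"])


# Minimal usage: the host sees only the API call and its asynchronous callback.
responses = []
module = NetworkProcessorModule(send_to_device=lambda cmd: None)
req = module.open_vc({"vpi": 0, "vci": 32}, on_complete=responses.append)
module.on_device_response({"id": req, "status": "OK"})
```

The point of the pattern is that the host never sees the command dictionary; it only ever touches the API call and the callback.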
Two-Dimensional Queuing/De-Queuing Methods And Systems For Implementing The Same
Simon Chong - Fremont CA Anguo Tony Huang - Mountain View CA Man Dieu Trinh - San Jose CA
Assignee:
Intel Corporation - Santa Clara CA
International Classification:
H04L 12/28
US Classification:
370/412, 370/413, 709/238, 710/52
Abstract:
Systems and methods for queuing and de-queuing packets in a two-dimensional link list data structure. A network processor processes data for transmission for a plurality of Virtual Connections (VCs). The processor creates a two-dimensional link list data structure for each VC. The data field of each data packet is stored in one or more buffer memories. Each buffer memory has an associated buffer descriptor that includes a pointer to the location of the buffer memory, and a pointer pointing to the memory of the next buffer descriptor associated with a buffer memory storing data for the same packet. Each data packet also has an associated packet descriptor including a pointer pointing to the memory location of the first buffer descriptor associated with that packet, and a pointer pointing to the memory location of the packet descriptor associated with the next data packet queued for transmission. A VC descriptor for each VC keeps track of the memory locations of the next packet descriptor and the next buffer descriptor to be de-queued, and the memory locations for storing the next packet descriptors and the next buffer descriptors to be queued.
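The descriptor chains in this abstract map naturally onto a small software model. The sketch below is illustrative only (class and field names are mine, and the patent describes hardware-managed memory, not Python objects): one dimension links buffer descriptors within a packet, the other links packet descriptors within a VC.

```python
class BufferDescriptor:
    def __init__(self, data):
        self.data = data    # stands in for the pointer to the buffer memory
        self.next = None    # next buffer descriptor of the same packet

class PacketDescriptor:
    def __init__(self, first_buffer):
        self.first_buffer = first_buffer  # head of this packet's buffer chain
        self.next = None                  # next packet queued on this VC

class VCDescriptor:
    """Tracks where to de-queue from and where to queue the next packet."""

    def __init__(self):
        self.head = None  # next packet descriptor to de-queue
        self.tail = None  # last packet descriptor queued

    def enqueue(self, chunks):
        # Build the buffer-descriptor chain for one packet (first dimension).
        bds = [BufferDescriptor(c) for c in chunks]
        for a, b in zip(bds, bds[1:]):
            a.next = b
        pd = PacketDescriptor(bds[0])
        # Link the packet into the VC's packet chain (second dimension).
        if self.tail is None:
            self.head = pd
        else:
            self.tail.next = pd
        self.tail = pd

    def dequeue(self):
        # Take the head packet, then walk its buffer chain in order.
        pd, self.head = self.head, self.head.next
        if self.head is None:
            self.tail = None
        chunks, bd = [], pd.first_buffer
        while bd is not None:
            chunks.append(bd.data)
            bd = bd.next
        return chunks


vc = VCDescriptor()
vc.enqueue(["hdr1", "payload1"])   # packet 1 spans two buffers
vc.enqueue(["hdr2"])               # packet 2 fits in one buffer
first = vc.dequeue()
second = vc.dequeue()
```

De-queuing reproduces each packet's buffers in order while packets leave in arrival order, which is the behavior the two pointer sets in the VC descriptor exist to support.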
Vertical Instruction And Data Processing In A Network Processor Architecture
Barry Lee - Union City CA, US Man Dieu Trinh - San Jose CA, US Ryszard Bleszynski - Saratoga CA, US
Assignee:
Bay Microsystems, Inc. - San Jose CA
International Classification:
H04L 12/54
US Classification:
370/429, 370/412, 370/465
Abstract:
An embodiment of this invention pertains to a network processor that processes incoming information element segments at very high data rates due, in part, to the fact that the processor is deterministic (i.e., the time to complete a process is known) and that it employs a pipelined “multiple instruction single data” (“MISD”) architecture. This MISD architecture is triggered by the arrival of the incoming information element segment. Each process is provided dedicated registers, thus eliminating context switches. The pipeline, the instructions fetched, and the incoming information element segment are very long in length. The network processor includes a MISD processor that performs policy control functions such as network traffic policing, buffer allocation and management, protocol modification, timer rollover recovery, an aging mechanism to discard idle flows, and segmentation and reassembly of incoming information elements.
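Of the policy functions listed, the aging mechanism is the easiest to illustrate in isolation. This is a hedged sketch that assumes a simple last-activity timestamp per flow; the abstract does not specify how the hardware actually implements aging:

```python
def age_out_idle_flows(flows, now, idle_limit):
    """Keep only flows active within idle_limit; discard the idle ones.

    flows: mapping of flow id -> timestamp of last activity (illustrative).
    """
    return {fid: last for fid, last in flows.items() if now - last <= idle_limit}


# Flow "b" has been idle for 70 time units and is discarded at a 20-unit limit.
survivors = age_out_idle_flows({"a": 100, "b": 40}, now=110, idle_limit=20)
```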
Barry Lee - Union City CA, US Man Dieu Trinh - San Jose CA, US
Assignee:
Bay Microsystems, Inc. - San Jose CA
International Classification:
H04L 12/28
US Classification:
370/395.21, 370/395.5
Abstract:
A differentiated services device is described. In one embodiment, the differentiated services device includes: a traffic metering unit to indicate whether an information element in a flow conforms to a peak rate and a committed rate; a storage congestion metering unit to determine whether the information element should be accepted or discarded; and a marking unit to mark the information element with one of a plurality of mark values, wherein the marking unit is coupled to the traffic metering unit and the storage congestion metering unit. Also, a method of marking an information element in a flow is described. In one embodiment, the method includes: indicating whether the information element in the flow conforms to a peak rate and a committed rate; determining whether the information element should be accepted or discarded; and marking the information element with one of a plurality of mark values.
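The meter-then-mark sequence above resembles a two-rate, multi-color marker (compare RFC 2698's two-rate three-color marker, which the abstract does not name). A token-bucket sketch follows; the bucket representation, color names, and sizes are all illustrative assumptions, and token replenishment at the peak and committed rates is left out:

```python
def mark(size, peak_bucket, committed_bucket):
    """Mark one information element against two token buckets."""
    if peak_bucket["tokens"] < size:
        return "red"                      # exceeds the peak rate: discard candidate
    peak_bucket["tokens"] -= size
    if committed_bucket["tokens"] < size:
        return "yellow"                   # within the peak but over the committed rate
    committed_bucket["tokens"] -= size
    return "green"                        # conforms to both rates


peak = {"tokens": 1500}       # replenished at the peak rate (not modeled here)
committed = {"tokens": 500}   # replenished at the committed rate (not modeled)
colors = [mark(size, peak, committed) for size in (400, 400, 800)]
```

The storage-congestion check described in the abstract would be a separate accept/discard decision layered on top of (or combined with) these colors.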
Two-Dimensional Queuing/De-Queuing Methods And Systems For Implementing The Same
Simon Chong - Fremont CA, US Anguo Tony Huang - Mountain View CA, US Man Dieu Trinh - San Jose CA, US
Assignee:
Intel Corporation - Santa Clara CA
International Classification:
H04L 12/28
US Classification:
370/412, 370/395.72
Abstract:
Systems and methods for queuing and de-queuing packets in a two-dimensional link list data structure. A network processor processes data for transmission for a plurality of Virtual Connections (VCs). The processor creates a two-dimensional link list data structure for each VC. The data field of each data packet is stored in one or more buffer memories. Each buffer memory has an associated buffer descriptor that includes a pointer to the location of the buffer memory, and a pointer pointing to the memory of the next buffer descriptor associated with a buffer memory storing data for the same packet. Each data packet also has an associated packet descriptor including a pointer pointing to the memory location of the first buffer descriptor associated with that packet, and a pointer pointing to the memory location of the packet descriptor associated with the next data packet queued for transmission. A VC descriptor for each VC keeps track of the memory locations of the next packet descriptor and the next buffer descriptor to be de-queued, and the memory locations for storing the next packet descriptors and the next buffer descriptors to be queued.
Systems And Methods For On-Chip Storage Of Virtual Connection Descriptors
Simon Chong - Fremont CA David A. Stelliga - Pleasanton CA Ryszard Bleszynski - Cupertino CA Anguo Tony Huang - Mountain View CA Man Dieu Trinh - San Jose CA
Assignee:
Intel Corporation - Santa Clara CA
International Classification:
G06F 15/167
US Classification:
709/212
Abstract:
Systems and methods for storing, or caching, VC descriptors on a single-chip network processor to enhance system performance. The single-chip network processor includes an on-chip cache memory that stores VC descriptors for fast retrieval. When a VC descriptor is to be retrieved, a processing engine sends a VC descriptor identifier to a content-addressable memory (CAM), which stores VC descriptor identifiers in association with addresses in the cache where associated VC descriptors are stored. If the desired VC descriptor is stored in the cache, the CAM returns the associated address to the processing engine and the processing engine retrieves the VC descriptor from the cache memory. If the VC descriptor is not stored in the cache, the CAM returns a miss signal to the processing engine, and the processing engine retrieves the VC descriptor from an off-chip memory. In this manner, VC descriptors associated with high bandwidth VCs are stored to the cache and retrieved much quicker from the cache than from the off-chip memory.
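The hit/miss path can be modeled in a few lines. This is a behavioral sketch only: the real design uses a hardware CAM and on-chip storage, and every name below is an assumption of mine rather than the patent's.

```python
class VCDescriptorCache:
    def __init__(self, off_chip):
        self.off_chip = off_chip   # models the external descriptor memory
        self.cam = {}              # VC id -> cache address (the CAM's role)
        self.cache = []            # models the on-chip descriptor storage
        self.hits = 0
        self.misses = 0

    def load(self, vc_id):
        """Install a descriptor on-chip and record its address in the CAM."""
        self.cam[vc_id] = len(self.cache)
        self.cache.append(self.off_chip[vc_id])

    def lookup(self, vc_id):
        addr = self.cam.get(vc_id)     # CAM search by descriptor identifier
        if addr is not None:
            self.hits += 1
            return self.cache[addr]    # hit: fast on-chip retrieval
        self.misses += 1
        return self.off_chip[vc_id]    # miss: fall back to off-chip memory


off_chip = {"vc1": {"rate": 155}, "vc2": {"rate": 622}}
cache = VCDescriptorCache(off_chip)
cache.load("vc1")            # e.g. a high-bandwidth VC kept on-chip
d1 = cache.lookup("vc1")     # CAM hit
d2 = cache.lookup("vc2")     # CAM miss
```

Both lookups return the right descriptor; the benefit the abstract claims is purely in the latency difference between the two paths.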