
Computing architecture with peripherals

A computing architecture and peripheral technology, applied in the fields of digital storage, memory addressing/allocation/relocation, instruments, etc., which can solve problems such as severe real-time problems and unwanted timing interference.

Inactive Publication Date: 2016-09-22
SYNAPTIC LAB

AI Technical Summary

Benefits of technology

This architecture significantly reduces timing interference and enables upper-bounded worst-case execution time analysis, improving the performance and reliability of memory transfer operations in multi-core systems by decoupling memory access times and maintaining cache coherency across processors and peripherals.

Problems solved by technology

Many shared memory computing devices with multiple bus-masters / interconnect-masters, such as the European Space Agency's Next Generation Microprocessor architecture [3], experience severe real-time problems [4].
For example, unwanted timing interference can be caused by memory transfer requests issued by other cores and bus-master peripherals to the level 2 cache module and the SDRAM.
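
The scale of this interference can be illustrated with a simple back-of-the-envelope calculation (the cycle counts and the round-robin arbitration assumed below are illustrative assumptions, not figures from this application): when several bus masters share one path to the level 2 cache and SDRAM, the worst-case latency of a single memory transfer grows with the number of competing masters.

/* Illustrative only: a rough worst-case latency bound for one core's
 * memory transfer when a shared L2/SDRAM port is arbitrated round-robin
 * among several bus masters. All figures are hypothetical. */
#include <stdio.h>

int main(void) {
    const int isolated_latency_cycles = 40;  /* assumed L2-miss-to-SDRAM latency */
    const int competing_masters       = 3;   /* other cores plus DMA-capable peripherals */

    /* Under round-robin arbitration, a request may wait behind one
     * in-flight request from every competing master in the worst case. */
    int worst_case_cycles = isolated_latency_cycles * (1 + competing_masters);

    printf("isolated latency:   %d cycles\n", isolated_latency_cycles);
    printf("worst-case latency: %d cycles\n", worst_case_cycles);
    return 0;
}

Any worst-case execution time analysis must budget for the larger figure, which is why decoupling one core's access time from unrelated traffic matters.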



Examples


Embodiment Construction

[0150] FIG. 1 is a block schematic diagram illustrating portions of a shared memory computing architecture (300) for preferred embodiments of the present invention. Shared memory computing architecture (300) comprises 5 unidirectional interconnect bridges (350, 351, 352, 353, 354). Each unidirectional interconnect bridge (350, 351, 352, 353, 354) comprises:
[0151] an interconnect target port ({350.ti, 350.te}, {351.ti, 351.te}, {352.ti, 352.te}, {353.ti, 353.te}, {354.ti, 354.te}) comprising:
[0152] an ingress port (350.ti, 351.ti, 352.ti, 353.ti, 354.ti); and
[0153] an egress port (350.te, 351.te, 352.te, 353.te, 354.te);
[0154] an interconnect master port ({350.mi, 350.me}, {351.mi, 351.me}, {352.mi, 352.me}, {353.mi, 353.me}, {354.mi, 354.me}) comprising:
[0155] an ingress port (350.mi, 351.mi, 352.mi, 353.mi, 354.mi); and
[0156] an egress port (350.me, 351.me, 352.me, 353.me, 354.me);
[0157] a memory transfer request module (330, 332, 334, 336, 338) comprising:
[0158] an ingress port (350.ti, ...
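
The bridge decomposition recited in paragraphs [0150] to [0158] can be summarised as a small data-structure sketch. The C types below use hypothetical names and hold no behaviour; they only mirror the figure's structure (an interconnect target port and an interconnect master port, each with ingress and egress sub-ports, joined by a memory transfer request module) and are not an implementation taken from the application.

/* Structural sketch of one unidirectional interconnect bridge from FIG. 1.
 * All type and field names are hypothetical. */

typedef struct {
    unsigned ingress_id;   /* e.g. 350.ti */
    unsigned egress_id;    /* e.g. 350.te */
} interconnect_port;

typedef struct {
    unsigned module_id;    /* e.g. 330 */
    /* buffering and arbitration state would live here */
} memory_transfer_request_module;

typedef struct {
    interconnect_port              target;  /* receives requests from an upstream master */
    memory_transfer_request_module mtrm;    /* regulates and forwards those requests */
    interconnect_port              master;  /* issues requests to a downstream target */
} unidirectional_bridge;

/* The architecture (300) instantiates five such bridges (350 to 354). */
static unidirectional_bridge bridges[5];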



Abstract

A shared memory computing device optimised for worst case execution time analysis that has at least one interconnect master, N cache modules and N processor cores. Each cache module has a finite state machine that employs an update-type cache coherency policy. Each processor core is assigned a different one of the N fully associative cache modules as its private cache. Furthermore, the execution time of memory transfer requests issued by each of the N processor cores is not modified by: (a) the unrelated memory transfer requests issued by any of the other N processor cores; or (b) the unrelated memory transfer requests issued by at least one other interconnect master.
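
As a rough behavioural sketch of the update-type cache coherency policy named in the abstract (assumed here to mean a write-update protocol, in which peer caches holding a line receive the new data rather than an invalidation), the C fragment below models N private caches and a write that propagates the updated line to every valid copy. The cache geometry, names and flat line array are assumptions for illustration only.

/* Minimal write-update sketch: on a write, push the new data to every
 * cache that holds a valid copy of the line instead of invalidating it.
 * Hypothetical sizes and layout; not the patent's implementation. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define N_CORES   4    /* assumed value of N */
#define N_LINES   64
#define LINE_SIZE 32

typedef struct {
    uint32_t tag;
    bool     valid;
    uint8_t  data[LINE_SIZE];
} cache_line;

typedef struct {
    cache_line lines[N_LINES];   /* fully associative in the abstract; a flat array here */
} private_cache;

static private_cache caches[N_CORES];

/* Core `writer` writes a line: update every valid copy, including its own. */
static void write_update(int writer, uint32_t tag, const uint8_t *new_data) {
    (void)writer;   /* the writer's copy is covered by the same loop */
    for (int core = 0; core < N_CORES; core++) {
        for (int i = 0; i < N_LINES; i++) {
            cache_line *l = &caches[core].lines[i];
            if (l->valid && l->tag == tag)
                memcpy(l->data, new_data, LINE_SIZE);   /* update, do not invalidate */
        }
    }
}

An update policy avoids invalidation-triggered refetches from shared memory, which is one reason such policies are attractive when per-core access times must remain predictable.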

Description

FIELD OF THE INVENTION
[0001] The present invention relates to multi interconnect master computing architectures and is particularly applicable to real-time and mixed-criticality computing involving peripherals.
BACKGROUND OF THE INVENTION
[0002] Throughout this specification, including the claims:
[0003] a bus master is a type of interconnect master;
[0004] a bus target / slave is a type of interconnect target;
[0005] a memory store coupled with a memory controller may be described at a higher level of abstraction as a memory store;
[0006] a peripheral may or may not have I/O pins;
[0007] a peripheral is connected to an interconnect that transports memory transfer requests;
[0008] a peripheral may be memory mapped, such that a memory transfer request to the interconnect target port of a peripheral is used to control that peripheral;
[0009] a processor core may be remotely connected to an interconnect over a bridge; and
[0010] a definition and description of domino timing effects can be found in [1]...
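
Item [0008] above, that a memory-mapped peripheral is controlled by memory transfer requests addressed to its interconnect target port, corresponds to the familiar pattern of accessing device registers through fixed addresses. The C sketch below uses a hypothetical UART base address and bit layout purely to illustrate that background note; it is not taken from the application.

/* Memory-mapped peripheral access: a store to a fixed address becomes a
 * memory transfer request routed to the peripheral's target port.
 * The address and register layout are hypothetical. */
#include <stdint.h>

#define UART_BASE   0x40001000u
#define UART_TXDATA (*(volatile uint32_t *)(UART_BASE + 0x0))
#define UART_CTRL   (*(volatile uint32_t *)(UART_BASE + 0x4))

static void uart_send(uint8_t byte) {
    UART_CTRL   |= 1u;     /* enable the transmitter (assumed control bit) */
    UART_TXDATA  = byte;   /* this store is the controlling memory transfer request */
}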

Claims


Application Information

Patent Type & Authority: Application (United States)
IPC(8): G06F12/08; G06F13/16
CPC: G06F12/0815; G06F12/0811; G06F13/1663; G06F2212/6032; G06F12/0864; G06F2212/283; G06F12/082; G06F13/20; G06F13/4068; G06F13/364; G06F13/4282; G06F13/372; G06F12/0831; G06F2212/621; G11C7/1072; G06F3/0619; G06F3/065; G06F3/067
Inventor: GITTINS, BENJAMIN AARON
Owner: SYNAPTIC LAB