Computing Reviews
Scalable shared-memory multiprocessor architectures
Thakkar S., Dubois M., Laundrie A., Sohi G. (ed) Computer 23(6): 71-73, 1990. Type: Article
Date Reviewed: Jul 1 1991
Comparative Review

Thakkar, Dubois, Laundrie, and Sohi

The authors briefly survey shared-memory multiprocessor hardware architectures, emphasizing the current main directions of research. They do not discuss distributed multiprocessor architectures such as NCube or iPSC. For shared-memory architectures, the authors mention network switching-based architectures (such as BBN’s Butterfly) and bus-based architectures (such as Sequent’s Symmetry). They say very little about network switching-based architectures, however, and instead focus on directory-based and bus-based schemes for maintaining coherence (providing for the integrity of data shared among the processors during computation).

After quickly reviewing four coherence properties that are incorporated in most protocols for maintaining coherence in shared-memory architectures, the authors summarize the use of presence flags, B pointers, and linked lists as bases for alternative protocols. As an example of the linked list approach, they mention the IEEE Scalable Coherent Interface project.
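
To make the alternatives concrete, here is a minimal sketch in C of the three directory organizations; the type names, field widths, and parameter values are illustrative assumptions, not drawn from the surveyed protocols.

```c
#include <stdint.h>

#define NPROC 64          /* assumed machine size (illustrative) */
#define NPTRS 4           /* assumed pointer budget for a limited-pointer entry */

/* Full-map ("presence flag") entry: one bit per processor. */
typedef struct {
    uint64_t present;     /* bit i set => processor i holds a copy */
    uint8_t  dirty;       /* set when exactly one cache holds a modified copy */
} dir_full_map_t;

/* Limited-pointer entry: room for a few sharers; overflow typically
   forces a broadcast or an eviction of one sharer. */
typedef struct {
    uint16_t ptr[NPTRS];  /* processor IDs of up to NPTRS sharers */
    uint8_t  count;       /* sharers recorded so far */
} dir_limited_ptr_t;

/* Linked-list entry (the SCI approach): memory keeps only the list head;
   the rest of the sharing list is distributed among the caches. */
typedef struct {
    uint16_t head;        /* node ID of the first sharer, or a NIL sentinel */
} dir_linked_list_t;
```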

Protocols for maintaining coherence become more complex and voluminous as the number of processors (and their associated ports and cache memories) increases. The concern in protocol design is to avoid a more-than-proportional increase in the complexity and volume of the protocol as the number of processors grows (the “scale” relationship). Another approach to seeking a favorable scale relationship is to modify the hardware for the processor connections. Here the authors review bus-based schemes, emphasizing multiple-bus and hierarchical-bus systems, and briefly mention various proposals for differing topologies and roles for the processor connections that enable access to shared memory.
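
The scale relationship can be given a rough quantitative feel. The sketch below compares the memory-side directory storage per line for the three organizations as the processor count P grows; all parameter values are assumptions chosen for illustration, and the linked-list figure counts only memory-side state (the list pointers themselves live in the caches).

```c
#include <stdio.h>

/* Smallest number of bits needed to name one of p processors. */
static unsigned log2_ceil(unsigned p) {
    unsigned bits = 0;
    while ((1u << bits) < p) bits++;
    return bits;
}

int main(void) {
    const unsigned nptrs = 4;                    /* assumed pointer budget */
    for (unsigned p = 16; p <= 65536; p *= 16) {
        unsigned full    = p;                    /* one presence bit per processor */
        unsigned limited = nptrs * log2_ceil(p); /* a few log2(P)-bit pointers */
        unsigned linked  = log2_ceil(p);         /* memory holds only the head ID */
        printf("P=%6u  full-map=%6u  limited=%3u  linked-list=%3u bits/line\n",
               p, full, limited, linked);
    }
    return 0;
}
```

Full-map storage grows linearly in P, while both pointer-based schemes grow only logarithmically, which is the sense in which the directory alternatives are claimed to scale.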

While the authors profess to have had a lot of help in preparing this survey, the result is not well-balanced. The purpose was to provide a context for three subsequent short papers, one on an example of a bus-based scheme (the Aquarius multiple-bus multiprocessor architecture) and two on the linked-list variety of directory-based schemes (the SCI at the Universities of Oslo and Wisconsin and the SDD protocol at Stanford University). The context could have been better set by leaving fewer loose ends, by being more consistent in the use of terminology, and by being more direct about the complexity supposedly being mitigated.

From the terminology and the references, the multiprocessor hardware and protocol people are clearly not talking with the software database people. While significant parallels exist in the situations and problems they face, as well as in the general character of the resources they can marshal, each group seems to be trying to proceed as though the other had little to offer. I see very capable people in both groups, but they are not in touch with each other.

James, Laundrie, Gjessing, and Sohi

The aim of the Scalable Coherent Interface (SCI), IEEE standards project P1596, is to define an extended computer backplane enabling access to a shared memory, scalable up to 64K nodes with a transfer rate of one gigabyte per second per node. Nodes may be processors, memories, or input-output ports in any mix.

The approach taken thus far is to use a distributed directory; linked lists; cache memory; point-to-point unidirectional connections for the communication of packets; and techniques emphasizing reliability, fault recovery, and optimization for high-frequency transactions. The definition work is being done by simulation, with participation by a group at the University of Oslo and a group at the University of Wisconsin. The SCI-P1596 chair is David B. Gustavson of Stanford University.
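
As a rough picture of how a distributed-directory sharing list is threaded through the caches, consider the following sketch; the field names and the attach-at-head sequence are illustrative assumptions and do not reproduce the P1596 formats.

```c
#include <stdint.h>

#define NNODES 16
#define NIL    0xFFFF          /* assumed sentinel for "no node" */

/* Illustrative per-node tag for one cache line (not the P1596 encoding). */
typedef struct { uint16_t forw, back; } sci_tag_t;

static sci_tag_t tag[NNODES];     /* modeled centrally here; in SCI each tag
                                     lives in that node's own cache */
static uint16_t  mem_head = NIL;  /* memory keeps only the head node ID */

/* New sharer `n` prepends itself to the sharing list.  In hardware each
   assignment below would be a point-to-point packet between nodes. */
static void attach_as_head(uint16_t n) {
    tag[n].forw = mem_head;       /* link toward the old head */
    tag[n].back = NIL;            /* the head has no predecessor */
    if (mem_head != NIL)
        tag[mem_head].back = n;   /* old head learns its new predecessor */
    mem_head = n;                 /* memory now names `n` as head */
}

int main(void) {
    attach_as_head(3);            /* first sharer: list is {3} */
    attach_as_head(7);            /* second sharer prepends: list is {7, 3} */
    return 0;
}
```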

The bulk of the paper discusses some of the list handling done for common anticipated situations. The discussion of how the proposed list handling differs from the usual bidirectionally linked list handling for queues and stacks seems weak. The bibliography is disappointingly skimpy.

Thapar and Delagi

The authors report on their work on a distributed-directory scheme for shared-memory multiprocessors. Singly-linked lists are the fundamental data structures used to help provide coherence in the access to shared data. The authors use most of the paper to describe basic list operations performed on the distributed queues; they also contrast their work with the Wisconsin-Oslo SCI work.
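
As an illustration of the flavor of such operations, the sketch below shows one of them, invalidation of all sharers, for a singly-linked distributed list; the names and structure are assumptions for illustration, not the published SDD protocol.

```c
#include <stdint.h>

#define NNODES 16
#define NIL    0xFFFF            /* assumed sentinel */

static uint16_t next[NNODES];    /* each sharer's forward pointer; in the
                                    protocol this lives in that node's cache */
static uint16_t mem_head = NIL;  /* memory's pointer to the first sharer */

static void purge_copy(uint16_t n) {
    (void)n;                     /* stands in for an invalidate packet */
}

/* A writer gains exclusive ownership by invalidating the whole sharing
   list: follow the chain from the head, purging one copy per hop. */
static void invalidate_sharers(void) {
    uint16_t n = mem_head;
    while (n != NIL) {
        uint16_t succ = next[n]; /* read the forward pointer before purging */
        purge_copy(n);
        next[n] = NIL;
        n = succ;
    }
    mem_head = NIL;              /* memory now records no sharers */
}

int main(void) {
    /* build a two-node sharing list {2 -> 5}, then invalidate it */
    next[5] = NIL; next[2] = 5; mem_head = 2;
    invalidate_sharers();
    return 0;
}
```

The singly-linked form halves the pointer storage of a doubly linked list, at the cost of more awkward deletion from the middle of the list, which is one of the tradeoffs the contrast with SCI turns on.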

While this work appears to be more complex than the SCI work, the authors also apparently assume fewer restrictions on the hardware configuration. While they offer some words of contrast, I would have liked to read how they see their list operations as differing from the usual and what they see as the tradeoffs on coherence for their proposed protocols.

Carlton and Despain

In the Aquarius scalable multiple-bus shared-memory hardware architecture, each node has access to two or more buses arranged in a multidimensional array and serving as a network. Access to the network is provided only for nodes; each node has memory, a cache, and a processor. Part of the memory is used for a portion of a distributed directory to provide coherence in processing shared data. Shared data are held in cache, except at the “root node” for the data. The root node can have private (unshared) data. Nodes can share data most quickly when they are on the same bus. Cache states and directory states are distributed, with each node showing the states only for the data it has.
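
One way to picture the arrangement is as a two-dimensional grid in which node (r, c) attaches to row bus r and column bus c; the sketch below rests on that assumption and is not drawn from the Aquarius papers.

```c
#include <stdbool.h>
#include <stdio.h>

/* Assumed 2-D multi-multi layout: node (r, c) attaches to row bus r and
   column bus c, so an N x N array of nodes needs only 2*N buses. */
typedef struct { int row, col; } node_t;

static bool one_hop(node_t a, node_t b) {
    return a.row == b.row || a.col == b.col; /* shared bus => direct transfer */
}

int main(void) {
    node_t a = {1, 2}, b = {1, 5}, c = {3, 4};
    printf("a-b one hop: %d\n", one_hop(a, b)); /* same row bus -> 1 */
    printf("a-c one hop: %d\n", one_hop(a, c)); /* no shared bus -> 0 */
    return 0;
}
```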

The authors give a clear summary of their proposed “multi-multi” architecture and protocol, including a few rough quantitative measures of scalability. They give no feel for the tradeoffs and compromises, and the list of references is helpful only on history. The authors only briefly discuss how they visualize the protocol working for widely shared data, a point I would have liked to read more about.

Reviewer: Ned Chapin (Review #: CR115416)
This review compares the following items:
  • Scalable shared-memory multiprocessor architectures
  • Scalable coherent interface
  • Stanford distributed-directory protocol
  • Aquarius project
Categories:
  • Shared Memory (B.3.2 ...)
  • Cache Memories (B.3.2 ...)
  • Directory Structures (D.4.3 ...)
  • Lists, Stacks, And Queues (E.1 ...)
  • Design Styles (B.3.2)
  • Multiple Data Stream Architectures (Multiprocessors) (C.1.2)
Other reviews under "Shared Memory":
  • Memory coherence in shared virtual memory systems. Li K., Hudak P. ACM Transactions on Computer Systems 7(4): 321-359, 1989. Type: Article. Reviewed: Oct 1 1990
  • Multigrain shared memory. Yeung D., Kubiatowicz J., Agarwal A. ACM Transactions on Computer Systems 18(2): 154-196, 2000. Type: Article. Reviewed: May 1 2001
  • Reducing Contention in Shared-Memory Multiprocessors. Stenström P. Computer 21(11): 26-37, 1988. Type: Article. Reviewed: Jun 1 1989
