Dr. Barton E. Schaefer
2 Adobe Court
Novato, CA 94945
(415) 898-1623
<http://www.well.com/user/barts>

An updated copy of this resume can be found at <http://www.well.com/user/barts/bart.html>.

Experience

Chief Scientist, Zanshin, November 1996 -

Designed a layered, backend-independent model for unified access to message-store data from diverse sources, including POP3, IMAP4, and databases. Implemented that design in portable STL C++ as the basis for LawnDart, a sophisticated multiprotocol electronic mail client. Designed and implemented portable, object-oriented C++ access to the c-tree Plus® database system as a backend for LawnDart.
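
The backend-independent layering might be pictured with a sketch like the one below; the interface and backend class names are hypothetical illustrations, not the actual LawnDart classes.

    #include <string>
    #include <vector>

    // Hypothetical backend-independent interface to a message store; client
    // code above this layer never needs to know which protocol or database
    // is underneath.
    class MessageStore {
    public:
        virtual ~MessageStore() {}
        virtual std::vector<std::string> listFolders() = 0;
        virtual std::string fetchMessage(const std::string& folder, int index) = 0;
    };

    // Each message source supplies its own backend by overriding the same
    // interface, so the mail client above this layer stays unchanged.
    class Pop3Store     : public MessageStore { /* POP3 protocol code */ };
    class Imap4Store    : public MessageStore { /* IMAP4 protocol code */ };
    class DatabaseStore : public MessageStore { /* e.g. c-tree Plus access */ };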

Consultant, NetManage, Inc., August - October, 1996

Facilitated transition of the Z-Mail product line from NCD to NetManage. Trained software developers and wrote developers' documentation. Recommended product directions and development strategies to engineering and marketing management. Reconciled production processes with product marketing requirements and created new software utilities to integrate Z-Mail into NetManage manufacturing.

Vice President, Messaging Technology and Architecture, NCD Software Corporation, June 1995 - July 1996.

Advised executive management on messaging technology issues, system design and architecture, and new product directions. Investigated applicability of new technologies and standards to NCD Software products. Worked with all levels of messaging division engineering and marketing staff on product issues and ongoing development.


Director, Messaging Technology, Network Computing Devices, Inc., and Vice President, Technology, Z-Code Division, February 1994 - May 1995.

Worked with marketing and engineering management on product planning and design. Coordinated integration of new features and technologies into UNIX, Windows, and Macintosh versions of Z-Mail, working closely with product engineering teams at all levels from managers to individual engineers.

This role at the Z-Code Division transitioned into a similar role within NCD Software when NCD reorganized its hardware and software divisions into separate businesses.


Vice President, Engineering, Z-Code Software Corporation, October 1990 - February 1994.

Co-founder of Z-Code Software. Developed and extended Z-Script message management language and interpreter, and other Z-Mail internals. Contributed to design and implementation of Z-Mail X11/Motif graphical user interface. Later assumed all engineering responsibility for UNIX/Motif development of Z-Mail, including portability to more than twenty UNIX platforms. Designed host-locked and floating-token product licensing system for Z-Mail.

Supervised growth of the Z-Code engineering staff from two programmers to twelve people, including managers, QA, and documentation staff, over a three-year period. Coordinated efforts of independent programmers developing Z-Mail user interfaces for character terminals, X11/OpenLook, Macintosh, and Windows. Provided technical assistance to emerging sales and product support departments.


Research Assistant, Oregon Graduate Institute, September 1984 - September 1990.

Conducted research in multiprocessor programming and computer graphics, full or part time depending on the school year. Served as Teaching Assistant for Introduction to Computer Graphics in 1986, 1987, and 1988.


Computer Programmer, Microfield Graphics, Inc., Summer 1985.

Wrote graphics demonstration programs on the IBM PC AT for use in advertising, using Lattice C and Microfield's T4 graphics board.


Additional Experience


Vice President and Secretary of the Board, Z-Code Software Corporation, October 1990 - February 1994.

Member of the Board of Directors, Zanshin, November 1996 -

Education

Oregon Graduate Institute of Science and Technology, Beaverton, Oregon, September 1984 - September 1990.

Ph.D., Computer Science and Engineering, October 1990.

Cornell College, Mt. Vernon, Iowa, September 1980 - May 1984.

B.S.S. in Computer Science, Summa Cum Laude, GPA 3.95 / 4.


Doctoral Dissertation

Title: A Technique for Fine-Grained Asynchronous Concurrency Through Parallel Graph Reduction.
Advisor: Dr. Richard B. Kieburtz.

Proposed a model for graph reduction on massively parallel computers that uses fine-grained tasks and priority-scheduled speculative computation. Performed experiments in distributed dynamic task scheduling using diffusion techniques. Implemented a parallel simulation of the massively parallel reducer and reported results for several test cases. See the dissertation summary below.


Research Interests

Dynamic distributed scheduling; Parallel algorithms, programming, and languages; Functional programming; Operating systems; Networks and network applications, especially electronic mail; Computer graphics and graphical interface systems.

Publications

Development and Performance of Parallel Algorithms for Multiprocessor Systems, B. Schaefer and S. S. Thakkar, OGI Tech Report CS/E 86-??? (no longer available).

A Model for Execution of PARLOG on a Distributed Processor Network, B. Schaefer and P. Borgwardt, OGI Tech Report CS/E 87-007.

Honors and Awards

First Place, Computer Science and Engineering Student Research Symposium, 1987.

Clark Foundation Fellow, Oregon Graduate Center, 1985.

Honorable Mention, National Science Foundation Graduate Fellowship Program, 1985.

Phi Beta Kappa, Cornell College, 1984.

Barton Award, Cornell College, 1983.

Norton Award, Cornell College, 1982.

Chandler Award in Social Science, Cornell College, 1981.

Third Place, Cornellian Fiction Contest, 1981.

William Fletcher King Scholarship, Cornell College, 1980-84.


Dissertation Summary

A Technique for Fine-Grained Asynchronous Concurrency Through Parallel Graph Reduction

Parallel execution of programs has become increasingly important as it becomes easier to build multiprocessor computers. Functional languages provide a good framework for parallel programming because they can express parallelism implicitly. It is thus no more difficult to reason about the correctness of a parallel functional program than a sequential one. However, success has been limited in the effort to exploit the fine-grained asynchronous concurrency found in many such programs. Current Multiple-Instruction Multiple-Data (MIMD) machines are not well suited to fine-grained concurrent programs, but new hardware technologies such as Dally's message-driven processor and the MIT Monsoon processor promise massively parallel MIMD systems in the near future. Techniques are needed to provide the large numbers of concurrent tasks that will best utilize such machines, without allowing runaway concurrency to swamp processor memory and other resources.

Parallelism is present in functional programs because the data dependencies within an expression may permit simultaneous evaluation of several parts of the expression. Sometimes it is clear that the value of a subexpression will be useful, and a conservative parallel strategy will evaluate only these subexpressions. However, to quickly provide the large numbers of tasks desirable in massively parallel machines, it may be necessary to attempt to evaluate expressions whose values are not known to be needed. To prevent such evaluations from "stealing" too much processor time from useful tasks, subexpressions are tagged with an evaluation priority. At each node, evaluations are scheduled according to this priority. Subexpressions whose values are known to be required can be given higher priority than those whose values may not be needed. When it is discovered that the value of an expression is not needed, the expression can be garbage collected.
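
As a rough illustration of this priority scheme (not the reducer described in the dissertation), the C++ sketch below gives known-needed work higher priority than speculative work and simply drops tasks whose values turn out to be unneeded; all type and function names are hypothetical.

    #include <functional>
    #include <queue>
    #include <vector>

    // Hypothetical two-level priority: work known to be needed outranks speculation.
    enum Priority { SPECULATIVE = 0, NEEDED = 1 };

    struct EvalTask {
        Priority priority;            // assigned when the subexpression is spawned
        bool still_wanted;            // cleared if the value turns out to be unneeded
        std::function<void()> reduce; // one reduction step for this subexpression
    };

    struct ByPriority {
        bool operator()(const EvalTask& a, const EvalTask& b) const {
            return a.priority < b.priority;  // higher priority runs first
        }
    };

    class Scheduler {
        std::priority_queue<EvalTask, std::vector<EvalTask>, ByPriority> ready_;
    public:
        void spawn(const EvalTask& t) { ready_.push(t); }

        // Run tasks in priority order; tasks whose values are no longer wanted
        // are dropped (in the real system their graph nodes would be reclaimed
        // by the garbage collector).
        void run() {
            while (!ready_.empty()) {
                EvalTask t = ready_.top();
                ready_.pop();
                if (t.still_wanted) t.reduce();
            }
        }
    };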

The combination of prioritized scheduling with graph reduction evaluation can be used to exploit speculative asynchronous parallelism in functional programs. The graph reduction technique was inspired by a Single-Instruction Multiple-Data (SIMD) combinator reduction algorithm described by Hillis for the Connection Machine. A virtual processor is allocated to every reducible combinator expression in the program graph. Each processor node of a MIMD machine can be multiplexed as necessary to implement these virtual processors. At each node, task scheduling is viewed as an instruction pipeline. Some tasks may be waiting for communications to complete, but these tasks are not allowed to block the pipeline. Instead, they are placed in a pool, returning to the queue of ready tasks only when they have received the awaited communication. As long as there are sufficiently many ready tasks to keep the pipeline full, performance will not suffer as a result of communication delays. When there are more tasks than are needed to fill the pipeline, the excess tasks can be offloaded to other processors. Diffusion scheduling, a heuristic method for dynamic distribution of workload in a multiprocessor system, is used to distribute tasks among processors.
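
A minimal C++ sketch of this per-node discipline is given below, assuming hypothetical Node and Task types: tasks awaiting a communication are parked in a pool instead of blocking the ready queue, and excess ready work is pushed toward a less loaded neighbour as a crude stand-in for the diffusion scheduling the dissertation actually uses.

    #include <cstddef>
    #include <deque>
    #include <unordered_map>

    struct Task { int id; bool needs_message; };

    class Node {
        std::deque<Task> ready_;                 // tasks that can run now
        std::unordered_map<int, Task> waiting_;  // tasks parked until a message arrives
        Node* neighbour_;                        // offload target for excess work
        std::size_t pipeline_depth_;             // tasks needed to keep the pipeline full
    public:
        Node(Node* neighbour, std::size_t depth)
            : neighbour_(neighbour), pipeline_depth_(depth) {}

        // Keep only enough ready tasks to fill the pipeline; push the excess
        // toward a less loaded neighbour (a stand-in for diffusion scheduling).
        void enqueue(const Task& t) {
            if (neighbour_ && ready_.size() >= pipeline_depth_ &&
                neighbour_->ready_.size() < ready_.size()) {
                neighbour_->enqueue(t);
            } else {
                ready_.push_back(t);
            }
        }

        // One scheduling step: a task that must wait for a communication is
        // parked in the pool rather than blocking the pipeline.
        void step() {
            if (ready_.empty()) return;
            Task t = ready_.front();
            ready_.pop_front();
            if (t.needs_message) {
                waiting_[t.id] = t;
            }
            // else: perform one reduction step for t here
        }

        // When the awaited message arrives, the task rejoins the ready queue.
        void message_arrived(int task_id) {
            auto it = waiting_.find(task_id);
            if (it != waiting_.end()) {
                Task t = it->second;
                t.needs_message = false;
                waiting_.erase(it);
                ready_.push_back(t);
            }
        }
    };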

The thesis develops a set of latency equations to determine the appropriate task pool size for pipelined prioritized scheduling. An algorithm for control of speculative parallelism and collection of unneeded resources is also described. These techniques have been applied to the design of a parallel combinator reduction system. The construction of a parallel simulator for this system is then described, and results of simulations on several test programs are discussed.