Parallel Scheduling with Batching
This work presents and analyzes three provably good parallel scheduling algorithms. Each algorithm uses batching: it delays work until that work can be executed as part of a larger batch. Batching is often necessary for good performance on problems that have a high start-up cost for doing any work, or a reduced per-item cost for a large group of work. These problems are challenging because delaying work conflicts with standard scheduling constraints, such as deadlines, and with common scheduling objectives, such as makespan and flow time.

Specifically, this dissertation focuses on two scheduling problems: dynamically multithreaded computations with implicit batching, and Integrated Stockpile Evaluation (ISE). The former is an online scheduling problem with precedence constraints in which delaying data structure operations can reduce total work and increase parallelism through batched operations. ISE is a traditional offline scheduling problem in which n jobs, each with an arbitrary release time, must be scheduled non-preemptively on m machines, with the extra constraint that a machine may only run jobs if it has been recently calibrated; batching can be used to reduce the number of calibrations needed. In both cases, a competitive scheduling algorithm must appropriately balance the benefit of batching against the cost of delaying work.
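To make the ISE trade-off concrete, here is a minimal sketch of how batching reduces calibrations at the cost of delaying jobs. The calibration length `T`, the single-machine setting, the unit-length jobs, and the greedy grouping rule are all illustrative assumptions for exposition; they are not the dissertation's actual algorithm.

```python
# Illustrative sketch only: T, single machine, unit-length jobs, and the
# greedy grouping rule are assumptions, not the dissertation's algorithm.

def calibrations_per_job(release_times):
    """No batching: calibrate the machine once for every job."""
    return len(release_times)

def calibrations_batched(release_times, T):
    """Greedy batching: jobs released within T of a group's first job
    are delayed so they share that group's single calibration window."""
    count = 0
    window_end = None  # time at which the current calibration expires
    for r in sorted(release_times):
        if window_end is None or r > window_end:
            count += 1           # open a new calibration
            window_end = r + T   # calibration stays valid for T time units
        # else: the job waits and runs inside the current window
    return count

releases = [0, 1, 2, 10, 11, 25]
print(calibrations_per_job(releases))       # 6 calibrations without batching
print(calibrations_batched(releases, T=5))  # 3 calibrations with batching
```

Batching here cuts the calibrations in half, but the jobs released at times 1, 2, and 11 incur delay while waiting for their group's window, which is exactly the tension between batching and delay that the algorithms must balance.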
Showing items related by title, author, creator and subject.
Parallel Architecture, Parallel Acquisition: Cross-Linguistic Evidence from Nominal and Verbal Domains. Sutton, Brett Royal (Georgetown University, 2017). This dissertation explores parallels between Complementizer Phrase (CP) and Determiner Phrase (DP) semantics, syntax, and morphology, including similarities in case-assignment, subject-verb and possessor-possessum agreement, ...