The objective of this thesis is to improve the high-frequency performance of components and filters by better compensating for the parasitic effects of practical components. The main application of this improvement is the design of low-pass filters for power electronics, although some other applications will be presented. In switching power supplies, the input and output filters must attenuate frequencies related to the fundamental switching frequency of the converter. The filters represent a major contribution to the weight, volume, and price of the power supply. Therefore, aspects of the design of the switching power converter, especially those related to the switching frequency, are limited by the high-frequency performance of the filters. The usual methods of improving the high-frequency performance of a filter include using larger, better components. Filter performance can be improved by using higher-quality inductors and capacitors or by adding high-frequency capacitors in parallel with the filter capacitor. An additional filter stage can also be added. All of these methods add significant cost to the design of the power supply. If the effect of high-frequency parasitic elements in the components can be reduced at low cost, the performance of the filter can be enhanced. This allows the development of filters with much better high-frequency attenuation...
Signature schemes are fundamental cryptographic primitives, useful as a stand-alone application and as a building block in the design of secure protocols and other cryptographic objects. In this thesis, we study both the uses that signature schemes find in protocols and the design of signature schemes suitable for a broad range of applications. An important application of digital signature schemes is an anonymous credential system. In such a system, one can obtain and prove possession of credentials without revealing any additional information. Such systems are the best means of balancing the need of individuals for privacy with the need of large organizations to verify that the people they interact with have the required credentials. We show how to construct an efficient anonymous credential system using an appropriate signature scheme; we then give an example of such a signature scheme. The resulting system is the first one with satisfactory communication and computation costs. The signature scheme we use to construct an anonymous credential system is of independent interest for use in other protocols. The special property of this signature scheme is that it admits an efficient protocol for a zero-knowledge proof of knowledge of a signature. Further...
This dissertation describes the design and evaluation of the Fast, Flexible Forwarding system (F3), a distributed system for disseminating information to networked subscribers. It examines existing subscription approaches, proposes F3 as an alternative to these approaches, and presents results from comparisons of F3 and other subscription approaches. Existing subscription approaches examined in the dissertation fall into three categories: unicast, single-identifier multicast, and content-based multicast systems. Careful examination of these approaches suggests that none is able to support complex subscription requests from large numbers of subscribers at high data rates. F3, the system proposed as an alternative, shares many features with other multicast systems. Like many multicast systems, for example, F3 uses an overlay network of routers to distribute messages to subscribers. F3 differs from other systems, however, in its use of preprocessors to analyze messages before routing begins. Preprocessors carry out analyses of the relationships between subscription topics and store the results in special content graph data structures. Preprocessors share the results of their analyses by distributing content graphs to routers in the F3 network. Using content graphs...
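The abstract does not spell out what a content graph looks like, so the following is only an illustrative sketch: subscription topics arranged in a DAG whose edges record subsumption relationships, which a router can walk to decide which subscribers a message satisfies. The class and method names (`ContentGraph`, `matching_topics`) are invented for the example, not F3's actual API.

```python
# Hypothetical sketch of a content graph: a DAG of subscription topics in
# which an edge parent -> child means "child is subsumed by parent", so a
# message matching a narrow topic also matches every broader ancestor.
from collections import defaultdict

class ContentGraph:
    def __init__(self):
        self.parents = defaultdict(set)   # topic -> set of broader topics

    def add_subsumption(self, broader, narrower):
        self.parents[narrower].add(broader)

    def matching_topics(self, topic):
        """All topics (the topic itself plus its ancestors) that a message
        on `topic` satisfies; a router forwards to subscribers of any."""
        seen, stack = set(), [topic]
        while stack:
            t = stack.pop()
            if t not in seen:
                seen.add(t)
                stack.extend(self.parents[t])
        return seen

g = ContentGraph()
g.add_subsumption("sports", "soccer")
g.add_subsumption("soccer", "world-cup")
print(sorted(g.matching_topics("world-cup")))
# → ['soccer', 'sports', 'world-cup']: a message on "world-cup" also
# reaches "soccer" and "sports" subscribers.
```

Precomputing this graph at a preprocessor, rather than re-deriving topic relationships per message at every router, is the kind of work-sharing the abstract attributes to F3.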
Millions of Americans suffer from pulmonary diseases. According to recent statistics, approximately 17 million people suffer from asthma, 16.4 million from chronic obstructive pulmonary disease, 12 million from sleep apnea, and 1.3 million from pneumonia - not to mention the prevalence of many other diseases associated with the lungs. Annually, the mortality attributed to pulmonary diseases exceeds 150,000. Clinical signs of most pulmonary diseases include irregular breathing patterns, the presence of abnormal breath sounds such as wheezes and crackles, and, in some cases, the complete absence of breathing. Throughout the history of medicine, physicians have listened for such sounds at the chest wall (or over the trachea) during patient examinations to diagnose pulmonary diseases - a procedure known as auscultation. Recent advancements in computer technology have made it possible to record, store, and digitally process breath sounds for further analysis. Although automated techniques for lung sound analysis have not been widely employed in the medical field, there has been growing interest among researchers in using technology to understand the subtler characteristics of lung sounds and their potential correlations with physiological conditions. Based on such correlations...
The cerebrocerebellar system is known to play a central role in human motion control and execution. However, engineering descriptions of the system, especially in relation to lower-body motion, have been very limited. This thesis proposes an integrated hierarchical neural model of sagittal-plane human postural balance and biped walking to 1) investigate an explicit mechanism of the cerebrocerebellar and other related neural systems, 2) explain the principles of human postural balancing and biped walking control in terms of the central nervous system, and 3) provide a biologically inspired framework for the design of humanoid or other biomorphic robot locomotion. The model was designed to be neurophysiologically plausible while also achieving practical simplicity. A combination of scheduled long-loop proprioceptive and force feedback represents the cerebrocerebellar system and implements postural balance strategies despite the presence of signal transmission delays and phase lags. The model demonstrates that postural control can be substantially linear within regions of the kinematic state-space, with switching driven by sensed variables. An improved and simplified version of the cerebrocerebellar system is combined with spinal pattern generation to account for human nominal walking and various robustness tasks. The synergy organization of the spinal pattern generation simplifies control of joint actuation. The substantial decoupling of the various neural circuits facilitates generation of modulated behaviors. This thesis suggests that kinematic control with no explicit internal model of body dynamics may be sufficient for these lower-body motion tasks and may play a common role in postural balance and walking. All simulated performances are evaluated with respect to actual observations of kinematics...
This thesis examines two distinct but related problems in low-level computer vision: color constancy and blind image deconvolution. The goal of the former is to separate the effect of global illumination from other properties of an observed image, in order to reproduce the effect of observing a scene under purely white light. For the latter, we consider the specific instance of deblurring, in which we seek to separate the effect of blur caused by camera motion from all other image properties in order to produce a sharp image from a blurry one. Both problems share the common characteristic of being bilinear inverse problems, meaning we wish to invert the effects of two variables confounded by a bilinear relationship, and of being underconstrained, meaning there are more unknown parameters than known values. We examine both problems in a Bayesian framework, utilizing real-world statistics to perform our estimation. We also examine the role of spatial evidence as a source of information in solving both problems. The resulting blind image deconvolution algorithm produces state-of-the-art results. The color constancy algorithm produces slightly improved results over the standard Bayesian approach when spatial information is used. We discuss the properties of and distinctions between the two problems and the solution strategies employed.; by Barun Singh.; Thesis (S.M.)--Massachusetts Institute of Technology...
As intelligent environments (IEs) move from simple kiosks and meeting rooms into the everyday offices, kitchens, and living spaces we use, the need for these spaces to communicate not only with users, but also with each other, will become increasingly important. Users will want to be able to shift their work environment between localities easily, and will also need to communicate with others as they move about. These IEs will thus require two pieces of infrastructure: a knowledge representation (KR) which can keep track of people and their relationships to the world; and a communication mechanism so that the IE can mediate interactions. This thesis seeks to define, explore and evaluate one way of creating this infrastructure, by creating societies of agents that can act on behalf of real-world entities such as users, physical spaces, or informal groups of people. Just as users interact with each other and with objects in their physical location, the agent societies interact with each other along communication channels organized along these same relationships. By organizing the infrastructure through analogies to the real world, we hope to achieve a simpler conceptual model for the users, as well as a communication hierarchy which can be realized efficiently.; by Stephen L. Peters.; Thesis (Ph. D.)--Massachusetts Institute of Technology...
Dynamic invariant detection is the identification of likely properties of a program based on variable values observed during program execution. While other dynamic invariant detectors use a brute-force algorithm, Daikon adds powerful optimizations to provide more scalable invariant detection without sacrificing the richness of the reported invariants. Daikon improves scalability by eliminating redundant invariants. For example, the suppression optimization allows Daikon to delay the creation of invariants that are logically implied by other true invariants. Although conceptually simple, the implementation of this optimization in Daikon has a large fixed cost and scales polynomially with the number of program variables. I investigated performance problems in two implementations of the suppression optimization in Daikon and evaluated several methods for improving the suppression algorithm: optimizing existing algorithms, using a hybrid, context-sensitive approach to maximize the effectiveness of the two algorithms, and batching applications of the algorithm to lower costs. Experimental results showed a 10% improvement in Daikon's runtime. In addition, I implemented an oracle to verify the implementation of these improvements and of the other optimizations in Daikon.; by Chen Xiao.; Thesis (M. Eng.)--Massachusetts Institute of Technology...
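The core idea of suppression - do not instantiate an invariant while a stronger, currently-true invariant logically implies it - can be sketched as follows. The classes and driver here are invented for illustration and are not Daikon's actual implementation; the example uses the classic pair `x > y` (suppressor) and `x >= y` (suppressed).

```python
# Illustrative sketch of invariant suppression: x >= y is implied by x > y,
# so it is not created until x > y is falsified by an observed sample.

class GreaterThan:            # x > y  (the suppressor)
    def __init__(self):
        self.falsified = False
    def check(self, x, y):
        if not (x > y):
            self.falsified = True

class GreaterEq:              # x >= y (suppressed while x > y holds)
    def __init__(self):
        self.falsified = False
    def check(self, x, y):
        if not (x >= y):
            self.falsified = True

def process(samples):
    strong = GreaterThan()
    weak = None               # suppressed: not instantiated yet
    for x, y in samples:
        if weak is None:
            strong.check(x, y)
            if strong.falsified:
                # The suppressor died; create the implied invariant now.
                weak = GreaterEq()
        else:
            weak.check(x, y)
    return strong, weak

strong, weak = process([(3, 1), (5, 5), (6, 2)])
# (5, 5) falsifies x > y, so x >= y is created only at that point and
# survives the remaining samples.
```

The saving is that while the suppressor holds, the implied invariant consumes no memory and is never checked - which matters when the number of candidate invariants grows polynomially with the number of program variables.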
The tracking of visual phenomena is a problem of fundamental importance in computer vision. Tracks are used in many contexts, including object recognition, classification, camera calibration, and scene understanding. However, the use of such data is limited by the types of objects we are able to track and the environments in which we can track them. Objects whose shape or appearance can change in complex ways are difficult to track because it is difficult to represent or predict the appearance of such objects. Furthermore, other elements of the scene may interact with the tracked object, changing its appearance or hiding part or all of it from view. In this thesis, we address the problem of tracking deformable, dynamically textured regions under challenging conditions involving visual clutter, distractions, and multiple and prolonged occlusions. We introduce a model of appearance capable of compactly representing regions undergoing nonuniform, nonrepeating changes to both their textured appearance and shape. We describe methods of maintaining such a model and show how it enables efficient and effective occlusion reasoning. By treating the visual appearance as a dynamically changing textured region, we show how such a model enables the tracking of groups of people. By tracking groups of people instead of each individual independently...
There are two obvious ways to map a two-dimensional relational database table onto a one-dimensional storage interface: store the table row-by-row, or store the table column-by-column. Historically, database system implementations and research have focused on the row-by-row data layout, since it performs best on the most common application for database systems: business transactional data processing. However, there is a set of emerging applications for database systems for which the row-by-row layout performs poorly. These applications are more analytical in nature; their goal is to read through the data to gain new insight and to use it to drive decision making and planning. In this dissertation, we study the problem of the poor performance of the row-by-row data layout for these emerging applications, and evaluate the column-by-column data layout as a solution to this problem. There have been a variety of proposals in the literature for how to build a database system on top of a column-by-column layout. These proposals entail different levels of implementation effort and have different performance characteristics. If one wanted to build a new database system that utilizes the column-by-column data layout, it is unclear which proposal to follow. This dissertation provides (to the best of our knowledge) the only detailed study of multiple implementation approaches for such systems...
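The two layouts, and why analytical scans favor the column-by-column one, can be shown with a minimal example (not drawn from the dissertation; table contents are invented):

```python
# The same table in both layouts, and a one-column analytical query.
rows = [(1, "alice", 30), (2, "bob", 25), (3, "carol", 41)]

# Row-by-row: each record's fields are stored contiguously; good for
# transactional workloads that read or write whole records at once.
row_store = list(rows)

# Column-by-column: each attribute's values are stored contiguously; an
# analytical query touching one attribute reads only that column.
col_store = {
    "id":   [r[0] for r in rows],
    "name": [r[1] for r in rows],
    "age":  [r[2] for r in rows],
}

# "SELECT AVG(age)" touches one of three columns in the column store,
# but every byte of every record in the row store.
avg_age = sum(col_store["age"]) / len(col_store["age"])
print(avg_age)  # → 32.0
```

On disk the difference is bandwidth, not arithmetic: the column store reads a third of the data for this query, and contiguous same-typed values also compress far better.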
A challenge in today's Internet is providing easy collaboration across administrative boundaries. Using and sharing resources between individuals in different administrative domains should be just as easy and secure as sharing them within a single domain. This thesis presents a new authentication service and a new remote login and execution utility that address this challenge. The authentication service contributes a new design point in the space of user authentication systems. The system provides the flexibility to create cross-domain groups in the context of a global, network file system using a familiar, intuitive interface for sharing files that is similar to local access control mechanisms. The system trades off freshness for availability by pre-fetching and caching remote users and groups defined in other administrative domains, so the file server can make authorization decisions at file-access time using only local information. The system offers limited privacy for group lists and has all-or-nothing delegation to other administrative domains via nested groups. Experiments demonstrate that the authentication server scales to groups with tens of thousands of members. REX contributes a new architecture for remote execution that offers extensibility and security. To achieve extensibility, REX ... selectively delegates authority to processes running on remote machines that need to access other resources. The delegation mechanism lets users incrementally construct trust policies for remote machines. Measurements of the system demonstrate that the modularity of REX's architecture does not come at the cost of performance.
Intelligent autonomous agents working cooperatively accomplish tasks more efficiently than single agents, and can accomplish tasks that are infeasible for a single agent. For example, a single agent transporting a ton of blocks will need to take multiple trips, possibly even more trips than it can manage, and will take longer than several agents transporting the blocks in parallel. Unexpected events can force agents to change their plans in order to adapt. During replanning, the expense of communication can limit the amount of information passed. In this thesis, agents reduce the communication load while adapting to environmental events by examining what changes will be made to teammates' plans and by passing information only on a need-to-know basis. More specifically, we describe a method whereby cooperating agents use identical planners and knowledge of the other agents' capabilities to pass information about the environmental conditions they observe that their teammates need in order to infer the correct actions to take. An agent must also pass conditions only to the teammates who are affected by the changes. Given that not all agents will have the same information about environmental conditions...
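The need-to-know rule described above can be sketched as follows. All names and the toy planner are invented for illustration; the point is only the mechanism: because planners and capability knowledge are shared, an observing agent can simulate each teammate's planner and notify only those whose plan would actually change.

```python
# Illustrative sketch of need-to-know condition passing among agents that
# share a deterministic planner and knowledge of each other's capabilities.

def plan(capabilities, conditions):
    """Stand-in for the shared deterministic planner: choose every task
    the agent is capable of that current conditions still allow."""
    return sorted(t for t in capabilities if conditions.get(t, True))

def notify_need_to_know(team, old_conditions, observed_change):
    """Return only the teammates whose plan the observed change alters."""
    new_conditions = {**old_conditions, **observed_change}
    recipients = []
    for name, caps in team.items():
        # Simulate the teammate's planner before and after the change.
        if plan(caps, old_conditions) != plan(caps, new_conditions):
            recipients.append(name)
    return recipients

team = {"hauler": {"carry"}, "scout": {"survey", "carry"}}
change = {"survey": False}        # e.g. the survey area became inaccessible
print(notify_need_to_know(team, {}, change))  # → ['scout']
```

The hauler's plan is unaffected by the blocked survey, so no message is sent to it - communication scales with the number of affected teammates rather than the team size.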
Power efficiency is a central issue in the study of mobile wireless nodes owing to constraints on their battery size and weight. A careful examination of the power consumption in low-power nodes shows that, as the total power available to such nodes decreases, the ratio of the power consumed for transmission purposes to the power consumed by other, non-transmission processes also decreases. The latter therefore constitutes a considerable fraction of the total power available to such devices. We perform our study in terms of energy, i.e., power integrated over time. Traditional information-theoretic energy constraints consider only the energy used for transmission purposes. We study optimal transmission strategies by explicitly taking into account the energy expended by processes other than transmission that run when the transmitter is in the 'on' state. We term this energy 'processing energy'. We first derive the capacity of a single-user Additive White Gaussian Noise (AWGN) channel in the presence of processing energy. We prove that, unlike the case where only transmission energy is taken into account, achieving capacity may require intermittent, or 'bursty', transmission. We show that in the low Signal-to-Noise Ratio (SNR) regime...
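The bursty-transmission result can be made concrete with a standard formulation of this trade-off; the notation below is ours for illustration and need not match the thesis's.

```latex
% AWGN capacity with processing power (illustrative notation).
% P: average power budget; N: noise power; \epsilon: processing power
% consumed whenever the transmitter is on; \theta: fraction of time on.
% While on, the power left for actual transmission is P/\theta - \epsilon.
C(P) \;=\; \max_{0 < \theta \le 1}\;
      \frac{\theta}{2}\,
      \log\!\left(1 + \frac{P/\theta - \epsilon}{N}\right),
\qquad \text{subject to } P/\theta \ge \epsilon .
```

For $\epsilon = 0$ the objective is maximized at $\theta = 1$, recovering the classical continuous-transmission AWGN capacity $\tfrac{1}{2}\log(1 + P/N)$; for $\epsilon > 0$ the maximizing $\theta$ can be strictly less than one, which is exactly the intermittent, 'bursty' transmission the abstract refers to: staying silent part of the time avoids paying the processing overhead continuously.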
As robots begin to emerge from the cloisters of industrial and military applications and enter the realms of coöperative partners for people, one of the most important facets of human-robot interaction (HRI) will be communication. This cannot merely be summarized in terms of the ongoing development of unimodal communication mechanisms such as speech interfaces, which can apply to any technology. Robots will be able to communicate in physically copresent, "face-to-face" interactions across more concurrent modalities than any previous technology. Like many other technologies, these robots will change the way people work and live, yet we must strive to adapt robots to humans, rather than the reverse. This thesis therefore contributes mechanisms for facilitating and influencing human-robot communication, with an explicit focus on the most salient aspect that differentiates robots from other technologies: their bodies. In order to communicate effectively with humans, robots require supportive infrastructure beyond the communications capabilities themselves, much as humans themselves do. They need to be able to achieve basic common ground with their counterparts in order to ensure that accurate and efficient communication can occur at all. For certain types of higher-level communication...
This dissertation studies mechanism design for various combinatorial problems in the presence of strategic agents. A mechanism is an algorithm for allocating a resource among a group of participants, each of whom has a privately-known value for any particular allocation. A mechanism is truthful if it is in each participant's best interest to reveal his private information truthfully, regardless of the strategies of the other participants. First, we explore a competitive auction framework for truthful mechanism design in the setting of multi-unit auctions, or auctions which sell multiple identical copies of a good. In this framework, the goal is to design a truthful auction whose revenue approximates that of an omniscient auction for any set of bids. We focus on two natural settings - the limited demand setting, where bidders desire at most a fixed number of copies, and the limited budget setting, where bidders can spend at most a fixed amount of money. In the limited demand setting, all prior auctions employed randomization in the computation of the allocation and prices. Randomization in truthful mechanism design is undesirable because, in arguing the truthfulness of the mechanism, we rely on an underlying assumption that the bidders trust the random coin flips of the auctioneer. Despite conjectures to the contrary...
This thesis evaluates bit-rate selection techniques to maximize throughput over wireless links that support multiple bit-rates. The key challenges in bit-rate selection are determining which bit-rate provides the most throughput and knowing when to switch to another bit-rate that would provide more throughput. This thesis presents the SampleRate bit-rate selection algorithm. SampleRate sends most data packets at the bit-rate it believes will provide the highest throughput. SampleRate periodically sends a data packet at some other bit-rate in order to update a record of that bit-rate's loss rate. SampleRate switches to a different bit-rate if the throughput estimate based on the other bit-rate's recorded loss rate is higher than the current bit-rate's throughput. Measuring the loss rate of every supported bit-rate would be inefficient, because sending packets at lower bit-rates could waste transmission time, and because successive unicast losses are time-consuming for bit-rates that do not work. SampleRate addresses this problem by sampling only at bit-rates whose lossless throughput is better than the current bit-rate's throughput. SampleRate also stops probing at a bit-rate if it experiences several successive losses. This thesis presents measurements from indoor and outdoor wireless networks that demonstrate that SampleRate performs as well as or better than other bit-rate selection algorithms. SampleRate performs better than other algorithms on links where all bit-rates suffer from significant loss.; by John C. Bicket.; Thesis (S.M.)--Massachusetts Institute of Technology...
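SampleRate's two decision rules - switch to the bit-rate with the best loss-adjusted throughput estimate, and probe only bit-rates whose lossless throughput could beat the current estimate - can be sketched as below. This is a simplified illustration with invented constants; the real algorithm also accounts for per-packet transmission times and retry behavior.

```python
# Simplified sketch of SampleRate's core decision rules.

def expected_throughput(bitrate_mbps, loss_rate):
    """Throughput estimate: the successful fraction of the raw bit-rate."""
    return bitrate_mbps * (1.0 - loss_rate)

def pick_bitrate(stats, current):
    """stats maps bit-rate -> measured loss rate. Switch away from the
    current bit-rate only if another rate's estimate beats it."""
    best = current
    for rate, loss in stats.items():
        if expected_throughput(rate, loss) > expected_throughput(best, stats[best]):
            best = rate
    return best

def sample_candidates(stats, current):
    """Probe only bit-rates whose *lossless* throughput exceeds the
    current estimate; rates that cannot possibly win are never sampled."""
    cur = expected_throughput(current, stats[current])
    return [r for r in stats if r != current and r > cur]

stats = {1: 0.0, 2: 0.1, 5.5: 0.4, 11: 0.9}   # 802.11b-style rates (Mbps)
current = 2
print(pick_bitrate(stats, current))       # → 5.5 (5.5*0.6=3.3 > 2*0.9=1.8)
print(sample_candidates(stats, current))  # → [5.5, 11]
```

Note that 1 Mbps is never probed: even a lossless 1 Mbps cannot beat the 1.8 Mbps estimate at the current rate, which is exactly the wasted-airtime case the abstract describes.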
This thesis describes Pastwatch, a distributed version control system. Pastwatch maintains versions of users' shared files. Each version is immutable: to make changes, a user checks out a version onto the user's computer, edits the files locally, then commits the changes to create a new version. The motivation behind Pastwatch is to support wide-area read/write file sharing, as when loosely affiliated programmers from different parts of the world collaborate on open-source software projects. To support such users, Pastwatch offers three properties. First, it allows users who travel frequently or whose network connections fail from time to time to access historical versions of the shared files or to create new versions while disconnected. Second, Pastwatch makes the current and historical versions of the shared files highly available. For example, even when their office building experiences a power failure, users can still create new versions and retrieve other users' changes from other locations. Supporting disconnected operation is not adequate by itself in these cases; users also want to see others' changes. Third, Pastwatch avoids using dedicated servers, since running a dedicated server incurs high administrative costs...
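The check-out/edit/commit model over immutable versions can be sketched with a toy example. The names here are hypothetical and the storage is an in-memory dict; Pastwatch itself keeps versions in a distributed data structure.

```python
# Toy sketch of immutable versions: commits append, never mutate.
import hashlib
import json

class History:
    def __init__(self):
        self.versions = {}        # version id -> immutable snapshot

    def commit(self, files, parent=None):
        """Create a new immutable version linked to its parent;
        existing versions are never modified."""
        snapshot = {"files": dict(files), "parent": parent}
        vid = hashlib.sha1(
            json.dumps(snapshot, sort_keys=True).encode()).hexdigest()
        self.versions[vid] = snapshot
        return vid

    def checkout(self, vid):
        """Return a mutable working copy; local edits (possibly made
        while disconnected) do not touch the history."""
        return dict(self.versions[vid]["files"])

h = History()
v1 = h.commit({"README": "v1"})
working = h.checkout(v1)
working["README"] = "v2"          # local edit, e.g. while disconnected
v2 = h.commit(working, parent=v1)
print(h.versions[v1]["files"]["README"])  # → "v1": old versions persist
```

Because every version is immutable and names its parent, disconnected users can commit locally and reconcile later, and historical versions remain retrievable from any replica.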
The understanding of molecular cell biology requires insight into the
structure and dynamics of networks that are made up of thousands of interacting
molecules of DNA, RNA, proteins, metabolites, and other components. One of the
central goals of systems biology is the unraveling of the as yet poorly
characterized complex web of interactions among these components. This work is
made harder by the fact that new species and interactions are continuously
discovered in experimental work, necessitating the development of adaptive and
fast algorithms for network construction and updating. Thus, the
"reverse-engineering" of networks from data has emerged as one of the central
concerns of systems biology research.
A variety of reverse-engineering methods have been developed, based on tools
from statistics, machine learning, and other mathematical domains. In order to
effectively use these methods, it is essential to develop an understanding of
the fundamental characteristics of these algorithms. With that in mind, this
chapter is dedicated to the reverse-engineering of biological systems.
Specifically, we focus our attention on a particular class of methods for
reverse-engineering, namely those that rely algorithmically upon the so-called...
Biomedical taxonomies, thesauri and ontologies in the form of the
International Classification of Diseases (ICD) as a taxonomy or the National
Cancer Institute Thesaurus as an OWL-based ontology, play a critical role in
acquiring, representing and processing information about human health. With
increasing adoption and relevance, biomedical ontologies have also
significantly increased in size. For example, the 11th revision of the ICD,
which is currently under active development by the WHO, contains nearly 50,000
classes representing a vast variety of different diseases and causes of death.
This evolution in terms of size was accompanied by an evolution in the way
ontologies are engineered. Because no single individual has the expertise to
develop such large-scale ontologies, ontology-engineering projects have evolved
from small-scale efforts involving just a few domain experts to large-scale
projects that require effective collaboration between dozens or even hundreds
of experts, practitioners and other stakeholders. Understanding how these
stakeholders collaborate will enable us to improve editing environments that
support such collaborations. We uncover how large ontology-engineering
projects, such as the ICD in its 11th revision...
Given that there is as yet no theoretical frame for complex engineered
systems (CES), this paper claims that bio-inspired engineering can help
provide such a frame. Within CES, bio-inspired systems play a key role. The
relationship between bio-inspired systems and biological computation,
however, has not been sufficiently worked out. Biological computation is to
be understood as the processing of information by living systems, carried
out in polynomial time, i.e., efficiently; yet such processing is regarded
by current science and research as an intractable problem (for instance,
the protein folding problem). A remark is needed here: P versus NP problems
must be well defined and delimited, but
biological computation problems are not. The shift from conventional
engineering to bio-inspired engineering needs to bring the subject (or
problem) of computability to a new level. Within the frame of computation,
the prevailing paradigm so far is still the Church-Turing thesis (CTt). In
other words, conventional engineering is still ruled by the CTt; however,
CES is ruled by the CTt, too. Contrary to the above, we shall argue here
that biological computation demands more careful thinking, one that leads
us towards hypercomputation. Bio-inspired engineering and CES thereafter...