We strain our eyes, cramp our necks, and destroy our hands trying to interact with computers on their terms. At the extreme, we strap on devices and weigh ourselves down with cables trying to re-create a sense of place inside the machine, while cutting ourselves off from the world and people around us. The alternative is to make the real environment responsive to our actions. It is not enough for environments to respond simply to the presence of people or objects: they must also be aware of the subtleties of changing situations. If all the spaces we inhabit are to be responsive, they must not require encumbering devices to be worn and they must be adaptive to changes in the environment and changes of context. This dissertation examines a body of sophisticated perceptual mechanisms developed in response to these needs as well as a selection of human-computer interface sketches designed to push the technology forward and explore the possibilities of this novel interface idiom. Specifically, the formulation of a fully recursive framework for computer vision called DYNA that improves performance of human motion tracking will be examined in depth. The improvement in tracking performance is accomplished with the combination of a three-dimensional...
Search engines have evolved from simple text indexing to indexing other forms of media, such as audio and video. I have designed and implemented a web-based system that permits people to search the transcripts of selected Supreme Court cases and retrieve audio file clips relevant to the search terms. The system development compared two implementation approaches: one based on transcript-alignment technologies developed by Hewlett-Packard, the other a servlet-based search system designed to return pre-parsed audio file clips. While the first approach has the potential to revolutionize audio content search, it could not consistently deliver successfully parsed audio file clips with the same user-friendly content and speed as the simpler second approach. This web service, implemented with the second approach, is currently deployed and publicly available at www.supremecourtaudio.net.; by Edward M. Wang.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.; Includes bibliographical references (leaves 73-74).
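The retrieval step behind such a system can be sketched as follows. Assuming a time-aligned transcript represented as (word, start_sec, end_sec) tuples, locating a clip for a search term reduces to a scan with padding; the data layout and padding policy here are illustrative, not the deployed system's.

```python
def find_clips(alignment, term, pad=2.0):
    """Given a time-aligned transcript -- a list of (word, start_sec,
    end_sec) tuples -- return (start, end) audio-clip ranges around each
    occurrence of `term`, padded by `pad` seconds.
    (Hypothetical sketch of the time-alignment idea only.)"""
    clips = []
    for word, start, end in alignment:
        if word.lower() == term.lower():
            clips.append((max(0.0, start - pad), end + pad))
    return clips
```

A phrase search would extend the scan to consecutive tuples, and overlapping clips could be merged before the audio is cut.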
This thesis describes the design, implementation, and execution of the Linear Road benchmark for stream-based data management systems. The motivation for benchmarking and the selection of the benchmark application are described. Test harness implementation is discussed, as are experiences using the benchmark to evaluate the Aurora engine. Effects of this work on the evolution of the Aurora engine are also discussed. Streams consist of continuous feeds of data from external data sources such as sensor networks or other monitoring systems. Stream data management systems execute continuous and historical queries over these streams, producing query results in real time. This benchmark provides a means of comparing the functionality and performance of stream-based data management systems relative to each other and to relational systems. The benchmark presented is motivated by the increasing prevalence of "variable tolling" on highway systems throughout the world. Variable tolling uses dynamically determined factors such as congestion levels and accident proximity to calculate tolls. Linear Road specifies a variable tolling system for a fictional urban area, including such features as accident detection and alerts, traffic congestion measurements...
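Variable tolling of the kind Linear Road models can be illustrated with a toy toll function: free-flowing segments are free, congested ones charge a penalty that grows with vehicle count. The thresholds and quadratic form below are hypothetical stand-ins, not the benchmark's actual specification.

```python
def compute_toll(avg_speed_mph: float, num_vehicles: int,
                 base_toll: float = 0.02) -> float:
    """Illustrative dynamically determined toll (in dollars).
    Parameters are invented for the example, not taken from Linear Road."""
    CONGESTION_SPEED = 40.0   # below this average speed, count as congested
    MIN_VEHICLES = 50         # light traffic is never tolled
    if avg_speed_mph >= CONGESTION_SPEED or num_vehicles <= MIN_VEHICLES:
        return 0.0
    # Quadratic penalty in the excess vehicle count
    return base_toll * (num_vehicles - MIN_VEHICLES) ** 2
```

In the benchmark itself this computation must run continuously over the position-report stream, which is exactly what makes it a stress test for stream engines.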
The IOA simulator is a tool that has been developed in the Theory of Distributed Systems group at MIT. This tool simulates the execution of automata described by the IOA language. It generates logs of execution traces and provides other pertinent information regarding the execution, such as the validity of specified invariants. Although the simulator supports paired simulation of two automata for the purpose of checking simulation relations, one of its limitations is its lack of support for the simulation of composite automata. A composite automaton represents a complex system and is made up of other automata, each representing a system component. This thesis concerns the addition of a capability to simulate composite automata in a manner that allows observing and debugging the individual system component automata. While there is work in progress on creating a tool that will translate a composite definition into a single automaton, the added ability to simulate composite automata directly will add modularity and simplicity, as well as ease of observing the behavior of individual components for the purpose of distributed debugging.; by Edward Solovey.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science...
Property testers are algorithms that distinguish inputs with a given property from those that are far from satisfying the property. Far means that many characters of the input must be changed before the property arises in it. Property testing was introduced by Rubinfeld and Sudan in the context of linearity testing and first studied in a variety of other contexts by Goldreich, Goldwasser and Ron. The query complexity of a property tester is the number of input characters it reads. This thesis is a detailed investigation of properties that are and are not testable with sublinear query complexity. We begin by characterizing properties of strings over the binary alphabet in terms of their formula complexity. Every such property can be represented by a CNF formula. We show that properties of n-bit strings defined by 2CNF formulas are testable with O(√n) queries...; (cont.) We show upper and lower bounds for the general problem and for specific partial orders. A few of our intermediate results are of independent interest. 1. If strings with a property form a vector space, adaptive 2-sided error tests for the property have no more power than non-adaptive 1-sided error tests. 2. Random LDPC codes with linear distance and constant rate are not locally testable. 3. There exist graphs with many edge-disjoint induced matchings of linear size. In the final part of the thesis, we initiate an investigation of property testing as applied to images. We study visual properties of discretized images represented by n x n matrices of binary pixel values. We obtain algorithms with the number of queries independent of n for several basic properties: being a half-plane, connectedness and convexity.
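The linearity testing credited to Rubinfeld and Sudan is the canonical example of a property tester. A minimal sketch of the Blum-Luby-Rubinfeld (BLR) test for functions f: {0,1}^n → {0,1}, with function names of our own choosing:

```python
import random

def blr_linearity_test(f, n: int, trials: int = 100) -> bool:
    """BLR linearity test: accept if f(x) XOR f(y) == f(x XOR y) on
    randomly chosen pairs. A function far from linear fails some trial
    with high probability; the query complexity is O(trials),
    independent of the 2^n-sized domain."""
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False  # reject: witnessed a violated linear relation
    return True

# A linear function: parity of a fixed subset of the input bits.
def parity(x: int) -> int:
    return bin(x & 0b1011).count("1") % 2
```

The tester reads only O(trials) function values yet distinguishes linear functions from those far from linear, which is the sublinear-query phenomenon the thesis investigates.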
We develop a method for learning patterns from a set of positive examples to retrieve semantic content from tree-structured data. Specifically, we focus on HTML documents on the World Wide Web, which contain a wealth of semantic information and have a useful underlying tree structure. A user provides examples of relevant data they wish to extract from a web site through a simple user interface in a web browser. To construct patterns, we use the notion of the edit distance between the subtrees represented by these examples to distill them into a more general pattern. This pattern may then be used to retrieve other instances of the selected data from the same page or other similar pages. By linking patterns and their components with semantic labels using RDF, we can create semantic "overlays" for Web information which are useful in such projects as the Semantic Web and the Haystack information management environment.; by Andrew William Hogue.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.; Includes bibliographical references (p. 103-106).
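The subtree edit distance that drives the pattern generalization can be sketched as follows. This is a simplified ordered-tree recursion of our own (relabel cost 1, deletion cost equal to subtree size), not the thesis's exact algorithm.

```python
def tree(label, *children):
    """Minimal ordered-tree value: (label, (child, child, ...))."""
    return (label, tuple(children))

def tree_edit_distance(t1, t2):
    """Simplified ordered-tree edit distance: node relabeling costs 1,
    and the two child sequences are aligned with a Levenshtein-style DP
    whose substitution cost is the recursive subtree distance.
    (A sketch of the idea, not the full Zhang-Shasha algorithm.)"""
    def size(t):
        return 1 + sum(size(c) for c in t[1])

    def dist(a, b):
        relabel = 0 if a[0] == b[0] else 1
        return relabel + seq_dist(a[1], b[1])

    def seq_dist(xs, ys):
        m, n = len(xs), len(ys)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):      # deleting a subtree costs its size
            d[i][0] = d[i - 1][0] + size(xs[i - 1])
        for j in range(1, n + 1):
            d[0][j] = d[0][j - 1] + size(ys[j - 1])
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                d[i][j] = min(d[i - 1][j] + size(xs[i - 1]),
                              d[i][j - 1] + size(ys[j - 1]),
                              d[i - 1][j - 1] + dist(xs[i - 1], ys[j - 1]))
        return d[m][n]

    return dist(t1, t2)
```

On HTML subtrees, the nodes would carry tag names (and possibly attributes), and low-distance example pairs are merged into a pattern with wildcards at the positions where they differ.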
This thesis focuses on the marriage of magnetic-flux-sensing feedback and boundary-mode operation in a flyback converter to create a simple, small, low-cost, isolated, and tightly regulated power supply. Although each technique has been implemented before, the marriage of these two concepts is new. The union of these two techniques is powerful in terms of simplifying the overall design: the same signal, the flyback pulse on the bias winding, controls both the feedback loop and the turn-on of the switch. In the process of building an isolated power supply, the benefits and disadvantages of various operational modes, along with other design options, are explored. The flyback converter was built using discrete parts including op-amps, comparators, and other analog building blocks. The goal was to create a proof-of-concept board to test the overall effectiveness of the new topology in a simple, quick manner.; by Mayur V. Kenia.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.; Includes bibliographical references (leaves 108-109).
This thesis proposes a reorganization algorithm, based on the region abstraction, to exploit the natural structure in overlays that stems from common interests. Nodes selfishly adapt their connectivity within the overlay in a distributed fashion such that the topology evolves to clusters of users with shared interests. Our architecture leverages the inherent heterogeneity of users and places within the system their incentives and ability to affect the network. As such...; (cont.) ...down implicit assumptions of altruism while showing the resulting negative impact on utility. From a selfish equilibrium, with much lower global utility, we show the ability of our algorithm to reorganize and restore the utility of individual nodes, and the system as a whole, to similar levels as realized in the SuperPeer network. Simulation of our algorithm shows that it reaches the predicted optimal utility while providing fairness not realized in other systems. Further analysis includes an epsilon equilibrium model where we attempt to more accurately represent the actual reward function of nodes. We find that by employing such a model, over 60% of the nodes are connected. In addition, this model converges to a utility 34% greater than achieved in the SuperPeer network while making no assumptions on the benevolence of nodes or centralized organization.
The Airborne Seeker Test Bed (ASTB) is an airborne sensor testing platform operated by the Tactical Defense Systems group at MIT Lincoln Laboratory. The Instrumentation Head (IH) is a primary sensor on the ASTB. It is a passive X-band radar receiver located on the nose of the plane. The IH serves as a truth sensor for other RF systems on the test bed and is controlled by an onboard tracking system, the Seeker Computer. The Seeker Computer processes IH data in real-time to track targets in Doppler, angle, and range. From these tracks it then produces angle-error feedback signals that command the IH gimbals, keeping targets centered along the antenna boresight. Over three years, a new Seeker Computer was built to replace an old system constrained by obsolete hardware. The redevelopment project was a team effort and this thesis presents a systems-level analysis of the design process, the new Seeker Computer system, and the related team and individual contributions to software and digital signal processing research that took place during development.; by Yue Hann Chin.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.; This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Includes bibliographical references (leaf 65).
Human beings exhibit rapid learning when presented with a small number of images of a new object. A person can identify an object under a wide variety of visual conditions after having seen only a single example of that object. This ability can be partly explained by the application of previously learned statistical knowledge to a new setting. This thesis presents an approach to acquiring knowledge in one setting and using it in another. Specifically, we develop probability densities over common image changes. Given a single image of a new object and a model of change learned from a different object, we form a model of the new object that can be used for synthesis, classification, and other visual tasks. We start by modeling spatial changes. We develop a framework for learning statistical knowledge of spatial transformations in one task and using that knowledge in a new task. By sharing a probability density over spatial transformations learned from a sample of handwritten letters, we develop a handwritten digit classifier that achieves 88.6% accuracy using only a single hand-picked training example from each class. The classification scheme includes a new algorithm, congealing, for the joint alignment of a set of images using an entropy minimization criterion. We investigate properties of this algorithm and compare it to other methods of addressing spatial variability in images. We illustrate its application to binary images...
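The congealing idea, jointly transforming a set of images to minimize per-pixel entropy across the stack, can be illustrated in a toy one-dimensional setting. The transform set (cyclic shifts) and the greedy coordinate scheme here are deliberate simplifications; the thesis's algorithm operates with richer spatial transforms on 2-D binary images.

```python
import math

def stack_entropy(rows):
    """Congealing objective: sum over pixel positions of the binary
    entropy of that position's values across the stack."""
    total = 0.0
    for col in zip(*rows):
        p = sum(col) / len(col)
        for q in (p, 1 - p):
            if q > 0:
                total -= q * math.log2(q)
    return total

def congeal_shifts(rows, max_iters=10):
    """Toy 1-D congealing: each pass greedily re-shifts every row
    (cyclically) to minimize the stack entropy, until no row moves.
    (A sketch of the entropy-minimization criterion only.)"""
    rows = [list(r) for r in rows]
    for _ in range(max_iters):
        changed = False
        for i, r in enumerate(rows):
            candidates = [r[s:] + r[:s] for s in range(len(r))]
            best = min(candidates,
                       key=lambda c: stack_entropy(rows[:i] + [c] + rows[i + 1:]))
            if best != r:
                rows[i], changed = best, True
        if not changed:
            break
    return rows
```

When the rows are shifted copies of one pattern, the minimum-entropy configuration lines them all up, which is exactly the joint-alignment behavior used before classification.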
We develop efficient techniques for the non-rigid registration of medical images by using representations that adapt to the anatomy found in such images. Images of anatomical structures typically have uniform intensity interiors and smooth boundaries. We create methods to represent such regions compactly using tetrahedra. Unlike voxel-based representations, tetrahedra can accurately describe the expected smooth surfaces of medical objects. Furthermore, the interior of such objects can be represented using a small number of tetrahedra. Rather than describing a medical object using tens of thousands of voxels, our representations generally contain only a few thousand elements. Tetrahedra facilitate the creation of efficient non-rigid registration algorithms based on finite element methods (FEM). We create a fast, FEM-based method to non-rigidly register segmented anatomical structures from two subjects. Using our compact tetrahedral representations, this method generally requires less than one minute of processing time on a desktop PC. We also create a novel method for the non-rigid registration of gray scale images. To facilitate a fast method, we create a tetrahedral representation of a displacement field that automatically adapts to both the anatomy in an image and to the displacement field. The resulting algorithm has a computational cost that is dominated by the number of nodes in the mesh (about 10...
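The linear interpolation inside a tetrahedral element, which is what lets a few thousand tetrahedra stand in for tens of thousands of voxels, can be sketched briefly. The function names are ours, and this ignores meshing and FEM solving entirely.

```python
import numpy as np

def barycentric_coords(p, verts):
    """Barycentric coordinates of point p inside a tetrahedron given by
    the 4x3 vertex array `verts`: solve for weights that express p as an
    affine combination of the vertices summing to 1."""
    A = np.vstack([verts.T, np.ones(4)])   # 4x4 linear system
    b = np.append(p, 1.0)
    return np.linalg.solve(A, b)

def interpolate_displacement(p, verts, node_disp):
    """Displacement at p, linearly interpolated from the per-node
    displacements `node_disp` (4x3), as a linear tetrahedral finite
    element does. (Illustrative sketch, not the thesis's solver.)"""
    w = barycentric_coords(p, verts)
    return w @ node_disp
```

A displacement field stored only at mesh nodes is thus defined everywhere in the volume, and the cost of evaluating or updating it scales with the node count rather than the voxel count.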
There have been a number of recent proposals for link- and network-layer protocols in the sensor networking literature, each of which claims to be superior to other approaches. However, a proposal for a networking protocol at a given layer in the stack is typically evaluated in the context of a single set of carefully selected protocols at other layers, as well as a particular network topology and application workload. Because of the limited data available about interactions between different protocols at various layers of the stack, it is difficult for developers of sensor network applications to select from among the range of alternative sensor networking protocols. This thesis attempts to remedy this situation by evaluating the interaction between several protocols at the MAC and network layers and measuring their performance in terms of end-to-end throughput and loss on a large, real-world TinyOS and Mica2 mote-based testbed. We report on different combinations of protocols using different application workloads and power-management schemes. This thesis analyzes the effects of various services provided by the different protocols, such as link-level retransmission, neighborhood management, and link-quality estimation. Our analysis suggests some common sources of poor performance that developers may experience during real-life deployments; based on this experience...
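The end-to-end metrics used to compare protocol combinations can be captured in a small bookkeeping structure; the field names are illustrative, not the testbed's own.

```python
from dataclasses import dataclass

@dataclass
class RunStats:
    """Per-run metrics for one MAC/network protocol combination.
    (Hypothetical bookkeeping for illustration.)"""
    sent: int        # packets injected at the source motes
    received: int    # packets delivered end-to-end at the sink
    seconds: float   # duration of the run

    @property
    def loss_rate(self) -> float:
        """Fraction of injected packets never delivered."""
        return 1.0 - self.received / self.sent if self.sent else 0.0

    @property
    def throughput_pps(self) -> float:
        """Delivered packets per second."""
        return self.received / self.seconds
```

Holding workload and power-management scheme fixed while varying one layer's protocol then isolates that layer's contribution to the two metrics.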
One of the most critical challenges in nanoscale science and engineering is to make functional 3D nanodevices with high accuracy. While considerable progress has been made in the "bottom-up" approach, the lithographic "top-down" approach remains the only way to encode human engineering effort and to meet optimal theoretical designs. Probably the most prominent example of lithographic fabrication is semiconductor manufacturing. However, such manufacturing, aside from being extremely expensive, is highly inflexible, virtually excluding any work other than silicon microelectronic devices. Meanwhile, the miniaturization and integration of optical devices can potentially revolutionize the field of optics, with an impact that may prove comparable to the transition of electronics from vacuum tubes to transistors. To achieve high-level functionalities and to meet the stringent tolerances in optical information processing, multilayered structures with both minimum feature sizes and layer-to-layer overlay accuracy down to a few nanometers are required, thus posing significant challenges in fabrication. Some requirements, such as nanometer-level spatial coherence, are beyond the capability of current semiconductor manufacturing.; (cont.) As part of an effort to develop a low-cost...
Over the last decade, researchers and practitioners have increasingly come to acknowledge that the introduction of security into software systems – especially complex, distributed systems – should proceed by means of a structured, systematic approach, combining principles from both software and security engineering. Such systematic approaches, particularly those implying some sort of process aligned with the development life-cycle, are termed security methodologies. While there are numerous methodologies in the literature, each with its own peculiar advantages and disadvantages, making it more or less suitable for a given set of project situations, none can lay claim to being universal, i.e. able to take into account all system-specific attributes, all technologies, all skill levels, and – in general – to be applicable to all project situations. In other words, the literature does not currently present developers with an “ideal” methodology (in an absolute sense); and, indeed, such a requirement would be infeasible, since “ideal” must necessarily be interpreted with respect to a given situation – encompassing system types, technologies, skillsets and whatever other qualities are seen as desirable. The problem facing the area is thus not so much the construction of “bigger and better” methodologies with novel or interesting features – i.e. (unattainably) ideal methodologies in an absolute sense – but the construction of (attainably) ideal methodologies for particular project situations. This thesis proposes a comprehensive solution to the latter problem by developing a conceptual “toolkit” for engineering security methodologies...
Computer systems are designed and used by humans, and human beings are characterized, among other things, by emotions. Given this fact, the process of designing and developing computer systems is, like any other facet of our lives, driven by emotions.
Requirements engineering is one of the main phases of software development, and several of its tasks involve acceptance and negotiation activities in which emotional factors play a key role. This paper presents a study based on applying Russell's Affect Grid to the main stakeholders in requirements engineering: developers and users. Results show that high arousal and low pleasure levels during the process are predictors of conflictive requirements.
One of the reasons linear motors, a technology nearly a century old, have not been adopted for a large number of linear motion applications is that they have historically had poor efficiencies. This has restricted the progress of linear motor development. The concept of a linear motor as a rotary motor cut and laid out flat, with a conventional rotary motor control scheme as a design basis, may not be the best way to design and control a high-speed linear motor. End effects and other geometry subtleties of a linear motor make it unique, and a means of optimizing efficiency with both the motor geometry and the motor control scheme will be analyzed to create a High-Speed Linear Induction Motor (LIM) with a higher efficiency than is possible with conventional motors and controls. This thesis pursues the modeling of a short-secondary-type Double-Sided Linear Induction Motor (DSLIM) that is proposed for use as an Electromagnetic Aircraft Launch System (EMALS) aboard the CVN-21. Mathematical models for the prediction of effects that are peculiar to DSLIM are formulated, and their overall effects on the performance of the proposed machine are analyzed.; (cont.) These effects are used to generate a transient motor model, which is then driven by a motor controller that is specifically designed to the characteristics of the proposed DSLIM. Due to this DSLIM's role as a linear accelerator...
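As background for the DSLIM modeling, the standard textbook relations for an ideal linear induction motor (not the thesis's end-effect-corrected model) are:

```latex
v_s = 2 \tau f, \qquad s = \frac{v_s - v}{v_s}, \qquad F \approx \frac{P_{\mathrm{gap}}}{v_s}
```

where $v_s$ is the synchronous speed set by the pole pitch $\tau$ and supply frequency $f$, $s$ is the slip at translator speed $v$, and the thrust $F$ follows from the air-gap power $P_{\mathrm{gap}}$. End effects perturb all three relations, which is precisely what the DSLIM-specific models must capture.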
This course provides students with an opportunity to conceive, design, and implement a product, using rapid prototyping methods and computer-aided tools. The first of two phases challenges each student team to meet a set of design requirements and constraints for a structural component. A course of iteration, fabrication, and validation completes this manual design cycle. During the second phase, each team conducts design optimization using structural analysis software, with their phase-one prototype as a baseline. Acknowledgments: This course is made possible thanks to a grant by the alumni-sponsored Teaching and Education Enhancement Program (Class of '51 Fund for Excellence in Education, Class of '55 Fund for Excellence in Teaching, Class of '72 Fund for Educational Innovation). We gratefully acknowledge the financial support. The course was approved by the Undergraduate Committee of the MIT Department of Aeronautics and Astronautics in 2003. We thank Prof. Manuel Martinez-Sanchez and the committee members for their support and suggestions.
Open data is an emerging paradigm for sharing large and diverse datasets -- primarily from governmental agencies, but also from other organizations -- with the goal of enabling the exploitation of the data for societal, academic, and commercial gains. Many datasets are now available, with diverse characteristics in terms of size, encoding, and structure. These datasets are often created and maintained in an ad hoc manner. Open data thus poses many challenges, and there is a need for effective tools and techniques to manage and maintain it. In this paper we argue that software maintenance and reverse engineering have an opportunity to contribute to open data and to shape its future development. From the perspective of reverse engineering research, open data is a new artifact that serves as input for reverse engineering techniques and processes; specific challenges of open data include document scraping, image processing, and structure/schema recognition. From the perspective of maintenance research, maintenance has to accommodate changes to open data sources by third-party providers, traceability of data transformation pipelines, and quality assurance of data and transformations. We believe that the increasing importance of open data, and the research challenges it brings with it, may lead to the emergence of new research streams for reverse engineering as well as for maintenance.; Comment: 7 pages...
CIM is a mechanized approach to problem solving in an enterprise. Its basis is intercommunication between information systems, in order to provide a faster and more effective decision-making process. This helps minimize human error, improve overall productivity, and ensure customer satisfaction.
Most enterprises and corporations started implementing integration by adopting automated solutions in a particular process, department, or area, in isolation from the rest of the physical or intelligent processes, resulting in the inability of systems and equipment to share information with each other and with other computer systems. The goal in a manufacturing environment is a set of systems that interact seamlessly with one another within a heterogeneous object framework, overcoming the many barriers (language, platforms, and even physical location) that prevent information sharing.
This study identifies the data needs of several information systems of a corporation and proposes a conceptual model to improve the information-sharing process and thus Computer Integrated Manufacturing (CIM).
The architecture proposed in this work provides a methodology for data storage, data retrieval, and data processing in order to provide integration at the enterprise level. There are four layers of interaction in the proposed DXA architecture. The name DXA (DDL-XML Architecture for Enterprise Integration) is derived from the standards and technologies used to define the layers and the corresponding functions of each layer. The first layer addresses the systems and applications responsible for data manipulation. The second layer provides the interface definitions that facilitate interaction between the applications on the first layer. The third layer is where data is structured using XML for storage, and the fourth layer is a central repository and its database...