Pervasive computing is a research field of computing technology that aims to achieve a new
computing paradigm, one in which the physical environment offers a high degree of pervasiveness and
availability of computers and other information technology (IT) devices, usually with communication
capabilities. Pervasive Information Systems (PIS), composed of these kinds of devices, raise issues that
challenge software development. Model-Driven Development (MDD), with its strong focus on and reliance
on models, has the potential to enable the use of concepts closer to the domain and the reduction of semantic
gaps; higher automation and lower dependency on technological changes; better capture and reuse of expert
knowledge; and an overall increase in productivity. Along with the focus on and use of models, software development
processes are fundamental to efficient development efforts of successful software systems. For the description
of processes, Software & Systems Process Engineering Meta-Model Specification (SPEM) is the current
standard specification published by the Object Management Group (OMG). This paper presents an extension
to SPEM (version 2.0) Base Plug-In Profile that includes stereotypes needed to support a suitable structural
process organization for MDD approaches aiming to develop software for PIS. A case study is provided to
evaluate the applicability of the extension.
Patient safety is a global challenge that requires knowledge and skills in multiple areas, including human factors and systems engineering. In this chapter, numerous conceptual approaches and methods for analyzing, preventing and mitigating medical errors are described. Given the complexity of healthcare work systems and processes, we emphasize the need for increasing partnerships between the health sciences and human factors and systems engineering to improve patient safety. Those partnerships will be able to develop and implement the system redesigns that are necessary to improve healthcare work systems and processes for patient safety.
This thesis studies material flow control policies for automobile manufacturing systems. Various control policies are implemented in simulations of manufacturing systems to test whether they increase the efficiencies of the systems in terms of specific performance measures of interest. Among the control policies, the Control Point Policy (CPP) is studied in depth, because this policy is designed for controlling complex manufacturing systems with multiple product types. First, fundamental research on the CPP is presented to understand the effects of its parameters on single product type manufacturing systems. Then, multiple product type, assembly-disassembly systems are studied with various control policies, including hybrid policies. Finally, a real automobile manufacturing system case study is presented, and various control policies are evaluated in the simulation model. Because performance evaluations are done by simulation, the speed of simulation becomes a very important problem. This thesis therefore presents a new approach to accelerating the speed of simulation.; by Chiwon Kim.; Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2004.; Includes bibliographical references (p. 109-110).; This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Advances in MEMS (micro-electromechanical systems) have enabled some of the "Lab-on-a-Chip" technologies and microfluidics that are pervasive in many of the current developments in analytical chemistry and molecular biology. Coinciding with this effort in micro-analytics has been research in chemical process miniaturization -- reducing the characteristic length scale of the unit operation to improve heat and mass transfer, and ultimately process performance. My research has involved the design and fabrication of novel chemical reaction systems using MEMS and microfabrication methods (photolithography, deep-reactive-ion etching, thin-film growth and deposition, and multiple wafer bonding). Miniature chemical systems provide the opportunity for distributed, on-demand manufacturing, which would eliminate the hazards of transportation and storage of toxic or hazardous chemical intermediates. Reactions that are particularly suitable for miniaturized chemical systems are those that are fast and involve toxic intermediates: the controlled synthesis of phosgene is such a reaction and has been demonstrated in a microfabricated packed bed reactor. Owing to the high surface-to-volume ratios, micro chemical systems also have the potential to make improvements in process performance through enhanced heat and mass transfer.; (cont.) Heterogeneously catalyzed gas-liquid reactions have been performed in the microfabricated reactors and have been shown to have mass transfer coefficients several orders of magnitude larger than their industrial-scale counterparts. Multiphase reactions are often hindered by mass-transfer limitations owing to the difficulty in transporting the gaseous reactant through the liquid to the catalytic surface. The microchemical device has been designed to increase the interfacial gas-liquid contacting area by promoting dispersion and preventing coalescence. Microfabrication allows the design of reactors with complicated fluidic distribution networks...
Communication over interference channels poses challenges not present for the more traditional additive white Gaussian noise (AWGN) channels. In order to approach the information limits of an interference channel, interference mitigation techniques need to be integrated with channel coding and decoding techniques. This thesis develops such practical schemes when the transmitter has no knowledge of the channel. The interference channel model we use is described by r = Hx + w, where r is the received vector, H is an interference matrix, x is the transmitted vector of data symbols chosen from a finite set, and w is a noise vector. The objective at the receiver is to detect the most likely vector x that was transmitted based on knowledge of r, H, and the statistics of w. Communication contexts in which this general integer programming problem appears include the equalization of intersymbol interference (ISI) channels, the cancellation of multiple-access interference (MAI) in code-division multiple-access (CDMA) systems, and the decoding of multiple-input multiple-output (MIMO) systems in fading environments. We begin by introducing mode-interleaved precoding, a transmitter precoding technique that conditions an interference channel so that the pairwise error probability of any two transmit vectors becomes asymptotically equal to the pairwise error probability of the same vectors over an AWGN channel at the same signal-to-noise ratio (SNR). While mode-interleaved precoding dramatically increases the complexity of exact ML detection...
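The detection objective stated in the abstract above is a standard integer least-squares problem; assuming the noise w is white Gaussian, maximum-likelihood detection takes the familiar form

```latex
\hat{\mathbf{x}} = \arg\min_{\mathbf{x} \in \mathcal{X}^{N}} \; \|\mathbf{r} - \mathbf{H}\mathbf{x}\|^{2}
```

where \(\mathcal{X}\) denotes the finite symbol alphabet and \(N\) the vector length (notation assumed here for illustration; the thesis may use different symbols).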
Diversity is a fundamental property of all evolving systems. This thesis examines spatial and temporal patterns of diversity. The systems I will study consist of a population of individuals, each with a potentially unique state, together with a dynamics consisting of copying or reproduction of individual states with small modifications to them (innovations). I show that properties of diversity can be understood by modelling the evolving genealogical tree of the population. This formulation is general enough that it captures interesting features of a range of natural and artificial systems, though I will pay particular attention to genetic diversity in biological populations, and discuss the implications of the results to conservation. I show that diversity is unevenly distributed in populations, and a disproportionate fraction is found in small sub-populations. The evolution of diversity is a dynamic process, and I show that large fluctuations in diversity can result purely from the internal dynamics of the population, and not necessarily from external causes. I also show how diversity is affected by the structure of the population (spatial or well-mixed), and determine the scaling of diversity with habitat area in spatial systems. Predictions from the model agree with existing experimental genetic data on global populations of bacteria. I then apply the method of modelling the genealogical tree of a population to further questions in evolution.; (cont.) Using a generic model of a pathogen evolving to coexist with a population of hosts...
Many real, complex networks have been shown to be scale-free. Scale-free means that the network's degree distribution retains the same form regardless of network size; such networks also have short path lengths and are highly clustered. We identify the qualities of scale-free networks, and discuss the mathematical derivations and numerically simulated outcomes of various deterministic scale-free models. Information Systems networks are sets of individual Information Systems that exchange meaningful data among themselves. However, for various reasons, they do not naturally grow in a scale-free manner. In this thesis, we specifically examine a technique proposed by MITRE that allows information to be exchanged efficiently between Information System nodes. With this technique, we show that a scale-free Information System network is sound in theory and practice, state the characteristics of such networks, and demonstrate how such a system can be constructed.; by Wee Hong Ang.; Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, 2006.; Includes bibliographical references (leaves 77-79).
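The heavy-tailed degree distribution characteristic of scale-free networks can be illustrated with a minimal preferential-attachment growth model (a generic Barabási–Albert-style sketch, not the deterministic models or the MITRE technique studied in the thesis):

```python
import random

def preferential_attachment(n, m, seed=0):
    """Grow a graph of n nodes: each new node attaches to m existing
    nodes chosen with probability proportional to current degree."""
    rng = random.Random(seed)
    # Start from a complete core of m + 1 nodes, each of degree m.
    degree = {i: m for i in range(m + 1)}
    # Each node appears in this list once per unit of degree, so a
    # uniform draw from it samples nodes proportionally to degree.
    endpoints = [i for i in range(m + 1) for _ in range(m)]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        degree[new] = m
        for t in targets:
            degree[t] += 1
            endpoints.extend([new, t])
    return degree

degrees = preferential_attachment(2000, 2)
# A few highly connected hubs emerge while most nodes keep degree m.
print(max(degrees.values()), min(degrees.values()))
```

Early nodes accumulate far more links than latecomers, producing the hub-dominated, size-independent degree distribution described above.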
Disaster response operations during recent terrorist attacks and natural disasters have been a cause for concern. Lack of planning is one source of difficulties with these operations, but even if a perfect plan is agreed upon before a disaster occurs, it is unlikely that disaster response operations will be successful without better technological support. For this thesis, three prominent and recent disaster cases are analyzed in order to better understand current disaster response problems that result from insufficient Information Technology (IT) and Intelligent Transportation Systems (ITS) support. After presenting this analysis, we provide results of a technology review, whose goal was to search for emerging technologies that could perform better during a disaster response than the standard, currently available systems.; (cont.) Using these emerging technologies, a Disaster Response Support System (DRSS) is proposed that would provide improved capability, interoperability, and robustness compared to the currently available support systems. Finally, potential barriers to deployment of a system such as the DRSS are discussed and ways in which these barriers can be overcome are suggested.; by Lev Pinelis.; Thesis (S.M.)--Massachusetts Institute of Technology...
Introduction: A physically realizable nonlinear system, like a linear one, is a system whose present output is a function of the past of its input. We may regard the system as a computer that operates on the past of one time function to yield the present value of another time function. Mathematically we say that the system performs a transformation on the past of its input to yield its present output. When this transformation is linear (the case of linear systems) we can take advantage of the familiar convolution integral to obtain the present output from the past of the input and the system is said to be characterized by its response to an impulse. That is, the response of a linear system to an impulse is sufficient to determine its response to any input. When the transformation is nonlinear we no longer have a simple relation like the convolution integral relating the output to the past of the input and the system can no longer be characterized by its response to an impulse since superposition does not apply. Wiener has shown, however, that we can characterize a nonlinear system by a set of coefficients and that these coefficients can be determined from a knowledge of the response of the system to shot noise excitation. Thus, shot noise occupies the same position as a probe for investigating nonlinear systems that the impulse occupies as a probe for investigating linear systems. The first section of this thesis is devoted to the Wiener theory of nonlinear system characterization. Emphasis is placed on important concepts of this theory that are used in succeeding chapters to develop a theory for determining optimum nonlinear systems.; by Amar Gopal Bose.; Thesis (Sc. D.)--Massachusetts Institute of Technology...
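The impulse-response characterization invoked above is the familiar convolution integral: for a causal linear time-invariant system with impulse response \(h\), the present output is

```latex
y(t) = \int_{0}^{\infty} h(\tau)\, x(t - \tau)\, d\tau
```

so knowledge of \(h\) alone determines the response to any input. For nonlinear systems no such single kernel suffices, which is what motivates Wiener's characterization by a set of coefficients measured under shot-noise excitation.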
Molecular self-assembly describes the assembly of molecular components into complex, supramolecular structures governed by weak, non-covalent interactions. In recent years, molecular self-assembly has been used extensively as a means of creating materials and devices with well-controlled, nanometer-scale architectural features. In this thesis, molecular self-assembly is used as a tool for the fabrication of both gene and drug delivery systems which, by virtue of their well-controlled architectural features, possess advantageous properties relative to traditional materials used in these applications. The first part of this thesis describes the solution-phase self-assembly of a new family of linear-dendritic "hybrid" polymers with plasmid DNA for applications in gene therapy. It begins with an overview of the design of next-generation, non-viral gene delivery systems and continues through the synthesis and validation of hybrid polymer systems, which possess modular functionalities for DNA binding, endosomal escape, steric stabilization, and tissue targeting. This part of the thesis concludes with applications of these systems to two areas of clinical interest: DNA vaccination and tumor targeted gene therapy.; (cont.) The second part of this thesis describes the directed self-assembly of polymeric thin films which are capable of degrading in response to either passive or active stimuli to release their contents. It begins with a description of passive release thin films which degrade by basic hydrolysis to release precise quantities of model drug compounds. These systems can be engineered to release their contents on time scales ranging from hours to weeks and can also be designed to release multiple drugs either in series or in parallel. Later...
A machine learning framework is presented that supports data mining and statistical modeling of systems that are monitored by large-scale sensor networks. The proposed algorithm is novel in that it takes both observations and domain knowledge into consideration and provides a mechanism that combines analytical modeling and inductive learning. An efficient solver is presented that allows the algorithm to solve large-scale problems efficiently. The solver uses a randomized kernel that incorporates domain knowledge into support vector machine learning. It also takes advantage of the sparseness of support vectors, which allows for parallelization and online training to further speed up the computation. The solver can be integrated into existing systems, embedded into databases, or exposed as a web service. Understanding the data generated by large-scale systems presents several problems. First, statistical modeling approaches may either under-fit or over-fit the data and are sensitive to data quality. Second, learning is a computationally intensive process and often becomes intractable when the sample size exceeds several thousand.; (cont.) Third, learning algorithms need to be tuned to the specific problem in most engineering and business fields. Last but not least...
Large-scale simulations of solvated molecules that treat the solvent explicitly are very computationally expensive, and as a result work has been done on modifying the potentials to treat solvent implicitly. Implicit solvation is well-known in Brownian Dynamics of dilute solutions, but offers promise to speed up many other types of molecular simulations as well, including studies of proteins and colloids where the local density can vary considerably. This work examines implicit solvent potentials within a more general coarse-graining framework. While a pairwise potential between solute sites is relatively simple and ubiquitous, an additional parameterization based on the local solute concentration has the possibility to increase the accuracy of the simulations with only a marginal increase in computational cost. In this thesis we describe a method in which the radial distribution function (RDF) and excess chemical potential of solute insertion ([mu]ex) for a system of Lennard-Jones particles are first measured in a fully explicit, all-particle simulation, and then reproduced across a range of solute particle densities in an implicit solvent simulation. The resulting potentials are density-dependent, implicit solvent (DDIS) potentials. We then test the transferability of DDIS potentials to mixtures and systems of chains without additional optimization. We find that RDF transferability to mixtures is very good and RDF errors in systems of chains increase linearly with chain length. Excess chemical potential transferability is good for mixtures at low solute concentration...
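The excess chemical potential of solute insertion mentioned above is commonly measured in simulation by Widom test-particle insertion (a standard relation, not necessarily the exact estimator used in the thesis):

```latex
\mu^{\mathrm{ex}} = -k_{B} T \, \ln \left\langle e^{-\Delta U / k_{B} T} \right\rangle
```

where \(\Delta U\) is the potential-energy change of inserting a ghost solute particle at a random position and the average is taken over configurations of the unperturbed system.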
This thesis focused on advancing the microchemical field from single-device demonstrations to systems that can perform multi-step series and parallel synthesis. Few examples of micro-separators and micro-pumps suited for miniaturized lab-on-a-chip systems for organic syntheses exist, so the first half of this thesis developed systems for these micro-unit-operations, while the second half demonstrated multistep microchemical operations enabled by these systems. In-line continuous separation devices are developed that enable removal of unreacted reagents/byproducts, making it possible to realize a series of reactions without leaving the microreactor environment. Differences in surface forces and preferential wettability characteristics of fluoropolymers are used for phase separation. Such microseparators are used to demonstrate 100% separation of two-phase flows of hexane and water, toluene and water, dichloromethane and water, and hexane and methanol. Integrated liquid-liquid extraction devices are microfabricated that perform two-phase contacting by segmented flow, followed by separation, resulting in single-stage extraction. Single-stage extraction of N,N-dimethylformamide from dichloromethane to water, and from diethyl ether to water, is demonstrated. The development of separators allows microreactors to be connected to microseparators to form microreactor networks enabling reactions and separations in succession. The starting reagents are loaded in syringes and syringe pumps push fluid through the train of microdevices. However...
Requirements engineering and software architecture are quite mature software engineering sub-disciplines, yet they often seem disconnected for many reasons: it is difficult to perceive the impact of functional and non-functional requirements on architecture and to establish appropriate trace links for traceability purposes. In other cases, the estimation of non-functional requirements, the quality properties a system should possess, is not perceived as useful enough for producing high-quality software. Therefore, in this special issue, we want to highlight the importance and role of quality requirements for architecting and building complex software systems, which in many cases require multidisciplinary engineering techniques that increase the complexity of the software development process.; Rafael Capilla, Muhammad Ali Babar, Oscar Pastor
Systems Engineering Project Report; The search, detection, identification and assessment components of the U.S. Navy's organic modular in-stride Mine Countermeasure (MCM) Concept of Operations (CONOPS) have been evaluated for their effectiveness as part of a hypothetical exercise in response to the existence of sea mines placed in the sea lanes of the Strait of Hormuz. The current MCM CONOPS has been shown to be capable of supporting the mine search and detection effort component allocation needs by utilizing two Airborne Mine Countermeasure (AMCM) deployed systems. This adequacy assessment is tenuous. The CONOPS relies heavily upon the Sikorsky MH-60/S as the sole platform from which the systems operate. This reliance is further compounded by the fact that both AMCM systems are not simultaneously compatible on board the MH-60/S. As such, resource availability will challenge the MCM CONOPS as well as the other missions for which the MH-60/S is intended. Additionally, the AMCM CONOPS systems are dependent upon the presence of warfighters in the helicopters above the minefield and as integral participants in the efforts to identify sea mines and to assess their threat level. Model-Based Systems Engineering (MBSE) techniques have been combined with research and stakeholder inputs in an analysis that supports these assertions.
This thesis examines the use of Non-intrusive Load Monitoring (NILM) in auxiliary shipboard systems, such as a low pressure air system, to determine the state of equipment in larger connected systems, such as the main propulsion engines. Using data collected on previously installed NILMs at the Naval Surface Warfare Center, Philadelphia DDG-51 Land Based Engineering Site (LBES), major event changes were analyzed and diagnosed using power data collected from the in-service low pressure air compressor (LPAC) and the in-service fuel oil pump. Events investigated include main propulsion engine starts and loadings, gas turbine generator starts, major electrical load shifts, and leak insertions into the low pressure air system. An additional NILM was installed on the General Electric LM2500 Universal Engine Controller (UEC) in order to assist in the diagnosis of various state changes. The UEC provides the appropriate interfaces to monitor and control each LM2500 GTM. The UEC controls the application of starter air, ignition power, and fuel to the engine while also receiving feedback of engine parameters from sensors on the engine. Using the combined data received by the LPAC, fuel oil pump, and UEC, a diagnosis system is derived that can detect major events in the engineering plant described above.; by Thomas Duncan McKay.; Thesis (Nav. E. and S.M.)--Massachusetts Institute of Technology...
This thesis explores a variety of educational feedback systems with an emphasis on developing them for in-class demonstrations and in-depth student projects. The nature of feedback systems means there is never a shortage of demonstrations or assignments that can truly capture the students' imagination and enthusiasm for class material. Unfortunately, it is sometimes the case that the feedback systems with the most potential for greatness are also unreliable, inaccurate, and inconsistent. This thesis attempts to narrow the gap by exploring, analyzing, and building a variety of exciting feedback systems. A comparison of general-purpose and high-performance operational amplifiers is created. Hardware for a web-based laboratory on canonical second-order systems is implemented. Cheap magnetic levitation kits for in-term projects are made even cheaper. And finally, the inverted pendulum - a decades-old Course VI heirloom and featured demonstration - is restored to its past glory.; by Isaac Dancy.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.; Includes bibliographical references (p. 85-86).
During the past two decades, Intelligent Transportation Systems (ITS) have provided transportation organizations with increasingly advanced tools both to operate and manage systems in real-time. At the same time, federal legislation has been tightening the linkages between state and local transportation investments and metropolitan air quality goals. In this context, ITS seems to represent a case of the potential synergies - or so-called "win-win" outcomes - that could be realized for the dual policy goals of air quality and mobility. If the various public sector organizations responsible for air quality and transportation could cooperate in deploying, assessing and further adapting these new technologies to take advantage of these synergies, they could achieve a "sustainable use" of ITS. However, looking beyond ITS and air quality, these issues point to broader questions of how to appropriately manage technology and its impacts on society, specifically those technologies deployed by the public sector. In particular, how does the public sector innovate and deploy technologies in ways that maximize the benefits, and minimize or avoid the negative impacts? In order to examine this phenomenon, this thesis takes the example of ITS and air quality to develop and test a broader framework of Integrated Innovation...
Self-adaptation has been proposed to overcome the complexity of today's
software systems, which results from uncertainty. Aspects of
uncertainty include changing system goals, changing resource availability, and
dynamic operating conditions. Feedback control loops have been recognized as
vital elements for engineering self-adaptive systems. However, despite their
importance, there is still no systematic way to design the
interactions between the different components comprising one particular
feedback control loop, or the interactions between components from
different control loops. Most existing approaches are either domain-specific
or too abstract to be useful. In addition, the issue of multiple control loops
is often neglected, and consequently self-adaptive systems are often designed
around a single loop. In this paper we propose a set of design patterns for
modeling and designing self-adaptive software systems based on the MAPE-K control
loop of IBM's architecture blueprint, which takes the multiple-control-loop
issue into account. A case study is presented to illustrate the applicability
of the proposed design patterns.; Comment: 18 pages, 11 figures
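The MAPE-K loop referenced above can be sketched minimally as four phases sharing a knowledge base (an illustrative sketch only; the component names and the threshold-based scaling policy are assumptions, not the paper's patterns):

```python
class Knowledge:
    """Shared knowledge base read and written by all four MAPE phases."""
    def __init__(self):
        self.target_load = 0.7   # goal: keep utilization below this
        self.replicas = 1        # current managed-system configuration

def monitor(sensor_reading, k):
    # Monitor: collect raw data from the managed system's sensors.
    return {"load": sensor_reading}

def analyze(symptoms, k):
    # Analyze: decide whether the adaptation goal is violated.
    return symptoms["load"] > k.target_load

def plan(violation, k):
    # Plan: choose an adaptation action (or none).
    return "scale_out" if violation else None

def execute(action, k):
    # Execute: apply the action through effectors, updating knowledge.
    if action == "scale_out":
        k.replicas += 1

k = Knowledge()
for reading in [0.5, 0.9, 0.95]:  # simulated sensor stream
    execute(plan(analyze(monitor(reading, k), k), k), k)
print(k.replicas)  # two readings exceeded the target, so two scale-outs
```

The design questions the paper addresses, how these four components interact within one loop and across several loops, are exactly the seams visible in this sketch.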
One of the Civil criteria in ABET TC2K is that programs “apply current knowledge and adapt to emerging applications of technology,” such as changes in the building codes. Structural design firms also have the expectation that Civil Engineering Technology (CET) graduates should be able to apply current codes to determine structural loads, as required for the analysis and design of structures. While most CET programs expose their students to structural analysis and design using instructor- or textbook-prescribed loads, few expose students to the detailed calculation of actual structural loads using current codes.
Many jurisdictions in the United States, including New York State, have recently adopted the International Building Code (IBC), which contains the latest provisions on structural loads, including wind and seismic loads for buildings. Prior to 2002, in New York State, the building code only required buildings to be designed for wind loads using a very simple tabular method. Seismic loads were not considered. Under that dispensation, it was possible and feasible to integrate the topic of structural loads (dead, live, snow, and wind loads) into any one of the structural design courses, albeit at an elementary level. However...
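As one example of the code-based load calculations discussed above, the wind provisions referenced by the IBC (ASCE 7) compute the velocity pressure at height \(z\) as

```latex
q_z = 0.00256\, K_z\, K_{zt}\, K_d\, V^2 \qquad (q_z \text{ in psf}, \; V \text{ in mph})
```

where \(K_z\) is the exposure coefficient, \(K_{zt}\) the topographic factor, \(K_d\) the directionality factor, and \(V\) the basic wind speed (some editions also include an importance factor in this expression; the exact form depends on the ASCE 7 edition in force).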