SKAN: Skin Scanner - System for Skin Cancer Detection Using Adaptive Techniques - combines computer engineering concepts with areas such as dermatology and oncology. Its objective is to use image recognition to distinguish images of skin cancer, specifically melanoma, from images showing only common spots or other skin diseases. This work uses the ABCDE visual rule, often applied by dermatologists for melanoma identification, to define which characteristics the software analyzes. It then applies various algorithms and techniques, including an ellipse-fitting algorithm, to extract and measure these characteristics and decide whether the spot is a melanoma. The achieved results are presented with special focus on the adaptive decision-making and its effect on the diagnosis. Finally, other applications of the software and its algorithms are presented.
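As a rough illustration of how an ellipse fit can quantify two of the ABCDE criteria (Asymmetry and Border irregularity), the Python sketch below uses OpenCV on a segmented lesion mask. The function name and the particular feature proxies are our own assumptions for illustration, not the thesis implementation.

import cv2
import numpy as np

def abcde_features(mask):
    """mask: binary uint8 image of the segmented skin lesion."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)   # largest blob = lesion
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)       # closed contour length

    # Fit an ellipse to the lesion boundary (requires >= 5 contour points).
    (cx, cy), axes, angle = cv2.fitEllipse(contour)
    minor, major = sorted(axes)

    return {
        # A: departure of the axis ratio from a circle (0 = symmetric).
        "asymmetry": 1.0 - minor / major,
        # B: perimeter^2 / (4*pi*area) equals 1 for a smooth circle and
        # grows with border irregularity.
        "border_irregularity": perimeter ** 2 / (4.0 * np.pi * area),
        # D: major axis in pixels; conversion to mm is camera-specific.
        "diameter_px": major,
    }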
A significant factor driving the development of power conversion technology is the need to increase performance while reducing size and improving efficiency. In addition, there is a desire to increase the level of integration of DC-DC converters in order to take advantage of the cost and other benefits of batch fabrication techniques. While advances in the power density and integration of DC-DC converters have been realized through development of better active device technologies, much room for improvement remains in the size and fabrication of passive components. To achieve these improvements, a substantial increase in operating frequency is needed, since intermediate energy storage requirements are inversely proportional to frequency. Unfortunately, traditional power conversion techniques are ill-suited to handle this dramatic escalation of switching frequency. New architectures have been proposed that promise to deliver radical performance improvements while potentially reaching microwave frequencies. These new architectures promise to enable substantial miniaturization of DC-DC converters and to permit a much higher degree of integration. The principal effort of this thesis is the development of design and characterization methods for rectifier topologies amenable to use in the new architectures. A computational design approach allowing fast and accurate circuit analysis and synthesis is developed and applied...
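To make the frequency-scaling argument concrete, a back-of-the-envelope relation (our illustration, not drawn from the thesis): a converter processing power P and transferring its energy once per switching cycle must buffer roughly

\[
  E_{\text{store}} \approx \frac{P}{f_{\text{sw}}},
  \qquad
  L \propto \frac{1}{f_{\text{sw}}},
  \qquad
  C \propto \frac{1}{f_{\text{sw}}}
\]

so raising the switching frequency directly shrinks the inductors and capacitors used for intermediate energy storage.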
The determination of molecular structures is of growing importance in modern chemistry and biology. This thesis presents two practical, systematic algorithms for two structure determination problems. Both algorithms are branch-and-bound techniques adapted to their respective domains. The first problem is the determination of structures of multimers given rigid monomer structures and (potentially ambiguous) intermolecular distance measurements; in other words, we need to find the transformations that produce the packing interfaces. A substantial difficulty results from ambiguities in assigning intermolecular distance measurements (from NMR, for example) to particular intermolecular interfaces in the structure. We present a rapid and efficient method to simultaneously solve the packing and assignment problems. The algorithm, AmbiPack, uses a hierarchical division of the search space and the branch-and-bound algorithm to eliminate infeasible regions of the space and focus computation on the remaining space. The algorithm is guaranteed to find all solutions to a predetermined resolution. The second problem is building a protein model from the initial three-dimensional electron density distribution (density map) from X-ray crystallography. This problem is computationally challenging because proteins are extremely flexible. Our algorithm...
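The hierarchical branch-and-bound strategy can be sketched generically in Python (a schematic illustration only: the actual AmbiPack search is over rigid-body transformations with distance-restraint bounds, and the names below are hypothetical):

def branch_and_bound(root, lower_bound, is_feasible, split, resolution):
    """Return every region of size <= resolution that can hold a solution.

    root        -- region covering the whole (transformation) search space
    lower_bound -- optimistic violation score; > 0 means no point in the
                   region can satisfy the restraints, so it is pruned
    is_feasible -- exact test applied to terminal (small) regions
    split       -- hierarchical subdivision of a region into children
    resolution  -- guarantee: all solutions are found to this resolution
    """
    solutions, stack = [], [root]
    while stack:
        region = stack.pop()
        if lower_bound(region) > 0:        # bound: prune infeasible region
            continue
        if region.size <= resolution:      # terminal region: test exactly
            if is_feasible(region):
                solutions.append(region)
        else:
            stack.extend(split(region))    # branch: recurse into children
    return solutions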
This thesis explores approaches to the problem of spoken document retrieval (SDR), which is the task of automatically indexing and then retrieving relevant items from a large collection of recorded speech messages in response to a user-specified natural language text query. We investigate the use of subword unit representations for SDR as an alternative to words generated by either keyword spotting or continuous speech recognition. Our investigation is motivated by the observation that word-based retrieval approaches face the problem of either having to know the keywords to search for a priori, or requiring a very large recognition vocabulary in order to cover the contents of growing and diverse message collections. The use of subword units in the recognizer constrains the size of the vocabulary needed to cover the language, and the use of subword units as indexing terms allows for the detection of new user-specified query terms during retrieval. Four research issues are addressed. First, what are suitable subword units and how well can they perform? Second, how can these units be reliably extracted from the speech signal? Third, how do the subword units behave when there are speech recognition errors, and how well do they perform? And fourth...
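A toy Python sketch of the subword indexing idea follows (our illustration; the unit choice, n-gram order, and scoring are simplified stand-ins for the recognizer-derived units studied in the thesis):

from collections import defaultdict

def ngrams(phones, n=3):
    """Overlapping phone n-grams used as indexing terms."""
    return [tuple(phones[i:i + n]) for i in range(len(phones) - n + 1)]

def build_index(docs, n=3):
    """docs: {doc_id: phone sequence from the subword recognizer}."""
    index = defaultdict(set)
    for doc_id, phones in docs.items():
        for gram in ngrams(phones, n):
            index[gram].add(doc_id)
    return index

def retrieve(index, query_phones, n=3):
    """Rank documents by how many query n-grams they contain, so query
    terms never seen during training can still be matched."""
    scores = defaultdict(int)
    for gram in ngrams(query_phones, n):
        for doc_id in index.get(gram, ()):
            scores[doc_id] += 1
    return sorted(scores.items(), key=lambda kv: -kv[1])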
Many optimization problems arising in various applications require minimization of an objective cost function that is convex but not differentiable. Such a minimization arises, for example, in model construction, system identification, neural networks, pattern classification, and various assignment, scheduling, and allocation problems. To solve convex but nondifferentiable problems, we have to employ special methods that can work in the absence of differentiability, while taking advantage of convexity and possibly other special structures that our minimization problem may possess. In this thesis, we propose and analyze some new methods that can solve convex (not necessarily differentiable) problems. In particular, we consider two classes of methods: incremental and variable metric.; by Angelia Nedić.; Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002.; Includes bibliographical references (p. 169-174).; This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
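As a generic illustration of the incremental class of methods (not the thesis's exact algorithms; the step-size rule and example are our own), consider minimizing a sum of convex, possibly nondifferentiable components by processing one component per update:

import numpy as np

def incremental_subgradient(subgrads, x0, steps):
    """subgrads: functions g_i(x), each returning a subgradient of f_i at x.
    x0: initial point; steps: iterable of step sizes (e.g., diminishing)."""
    x = np.asarray(x0, dtype=float)
    for alpha in steps:
        # One pass processes the components sequentially ("incrementally"),
        # updating x after each component rather than after the full sum.
        for g in subgrads:
            x = x - alpha * g(x)
    return x

# Example: f(x) = |x - 1| + |x + 1|, whose minimizers form [-1, 1].
g1 = lambda x: np.sign(x - 1.0)   # a subgradient of |x - 1|
g2 = lambda x: np.sign(x + 1.0)   # a subgradient of |x + 1|
x_star = incremental_subgradient([g1, g2], x0=[5.0],
                                 steps=(1.0 / k for k in range(1, 200)))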
This thesis involves an extensive experimental and theoretical study of the thermoelectric-related transport properties of Bi1-xSbx nanowires, and presents a theoretical framework for predicting the electrical properties of superlattice nanowires. A template-assisted fabrication scheme is employed to synthesize Bi-based nanowires by pressure-injecting liquid metal alloys into the hexagonally packed cylindrical pores of anodic alumina. These nanowires possess a very high crystalline quality with a diameter-dependent crystallographic orientation along the wire axis. A theoretical model for Bi1-xSbx nanowires is developed, taking into consideration the effects of the cylindrical wire boundary, multiple and anisotropic carrier pockets, and non-parabolic dispersion relations. A unique semimetal-semiconductor (SM-SC) transition is predicted for these nanowires as the wire diameter decreases or as the Sb concentration increases. Also, an unusual physical phenomenon involving a very high hole density of states due to the coalescence of 10 hole carrier pockets, which is especially advantageous for improving the thermoelectric performance of p-type materials, is uncovered for Bi1-xSbx nanowires. Various transport measurements are reported for Bi-related nanowire arrays as a function of temperature...
The optical absorption of bismuth nanowires in the energy (wavenumber) range of 600-4000 cm⁻¹ is studied. Optical reflection and transmission spectra reveal that bismuth nanowires have a large and intense absorption peak as well as several smaller absorption peaks that are not measured in bulk bismuth. The smaller absorption peaks fit reasonably well to theoretical models for intersubband absorption in bismuth nanowires. The wire diameter, polarization, and doping dependencies, as well as the spectral shape of the dominant peak, agree with simulations of the optical absorption resulting from an L-point valence to T-point valence band electronic transition. The large absorption peak is present even for nanowires too large to exhibit quantum confinement, showing that the absorption results from a surface-induced effect and not from quantum confinement. The enhanced optical absorption in nanowires over bulk bismuth is attributed to a surface term in the matrix element, which results from the spatial gradient of the dielectric function and the large dielectric mismatch between bismuth and the surrounding alumina or air. A comparison of the measured spectra with simulations of optical absorption resulting from direct L-point electronic transitions demonstrates that this absorption mechanism is not dominant in our materials. In order to explore the optical properties of bismuth nanowires...
MIT is currently developing a web-based service for the large-scale assessment of student writing (iMOAT.net). This service contains a database of useful data, particularly texts of student essays, that should be available for research and collaboration purposes. In this thesis, I propose a high-level design for an interface to the iMOAT system, called FREiMOAT, that will control access to this research data. This information has the potential to be used both by independent researchers and by current users of the iMOAT system for self-evaluation and collaboration purposes. Current users of the iMOAT system (administrators at a number of schools around the country) have requested the ability to view each other's materials so they might improve their own assessments (e.g., SMALL UNIVERSITY would like to see how STATE COLLEGE is able to use the service on larger bodies of students). Independent researchers, on the other hand, might want access to the site to determine whether students from different states perform differently on the same assessments. This interface is responsible for two main tasks: access control and maintaining data anonymity.; by Jordan Michael Alperin.; Thesis (M. Eng.)--Massachusetts Institute of Technology...
The goal of this work is to make an (n,m) assignment for individual suspended single-wall carbon nanotubes (SWNTs) based on Raman spectroscopy measurements of their radial breathing modes and electronic transition energies E[sub]ii. The suspended SWNTs are grown on a photolithographically defined electrode pattern, which is designed so that suspended SWNTs are grown at known locations with known directions. The suspended SWNTs are then characterized by atomic force microscopy (AFM), scanning electron microscopy (SEM), and Raman spectroscopy. Finally, the information on the diameter distribution and the energy of the electronic transitions of the resonant suspended SWNTs obtained from Raman spectroscopy is compared to other published works to make (n,m) assignments of a number of suspended SWNTs.; by Hyungbin Son.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.; Includes bibliographical references (p. 45-48).
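The assignment logic can be illustrated with a short Python sketch (hedged: the empirical RBM calibration constants vary across the literature, and the values below are one common choice, not necessarily those used in this work):

import math

A_CM1_NM = 248.0   # omega_RBM ~ A / d for isolated SWNTs (one calibration)
A_CC = 0.246       # graphene lattice constant in nm

def diameter_from_rbm(omega_rbm_cm1):
    """Convert a measured RBM frequency (cm^-1) to a diameter in nm."""
    return A_CM1_NM / omega_rbm_cm1

def diameter_nm(n, m):
    """Geometric diameter of the (n,m) nanotube."""
    return A_CC * math.sqrt(n * n + n * m + m * m) / math.pi

def candidate_assignments(omega_rbm_cm1, tol_nm=0.05, max_index=25):
    """List chiralities whose diameters match the measured RBM."""
    d_meas = diameter_from_rbm(omega_rbm_cm1)
    return [(n, m) for n in range(1, max_index)
                   for m in range(0, n + 1)
                   if abs(diameter_nm(n, m) - d_meas) < tol_nm]

# Candidates are then narrowed by comparing measured E_ii transition
# energies against a Kataura plot (not reproduced here).
print(candidate_assignments(165.0))   # RBM at 165 cm^-1 -> d ~ 1.50 nm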
Although a number of object recognition techniques have been developed to process LADAR-scanned terrain scenes, these techniques have had limited success in target discrimination, in part due to low-resolution data and limits in available computation power. We present a pose-independent Automatic Target Detection and Recognition system that uses data from an airborne 3D imaging LADAR sensor. The system uses geometric shape and size signatures from target models to detect and recognize targets under heavy canopy and camouflage cover in extended terrain scenes. A method for data integration was developed to register multiple scene views to obtain a more complete 3D surface signature of a target. Automatic target detection was performed using the general approach of "3D cueing," which determines and ranks regions of interest within a large-scale scene based on the likelihood that they contain the respective target. Each region of interest is then passed to an ATR algorithm to accurately identify the target from among a library of target models. Automatic target recognition was performed using spin-image surface matching, a pose-independent algorithm that determines correspondences between a scene and a target of interest. Given a region of interest within a large-scale scene...
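For reference, the core spin-image computation can be sketched as follows (a simplified illustration of the Johnson-Hebert style representation; the bin size and image extent are arbitrary placeholders):

import numpy as np

def spin_image(points, p, n, bin_size=0.1, image_width=20):
    """Accumulate a 2D histogram of scene points around an oriented
    basis point p with unit surface normal n.

    alpha: radial distance from the normal line through p
    beta : signed height above the tangent plane at p
    """
    d = points - p                      # vectors from the basis point
    beta = d @ n                        # projection onto the normal
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta ** 2, 0.0))

    img = np.zeros((image_width, image_width))
    i = ((beta / bin_size) + image_width / 2).astype(int)   # row: height
    j = (alpha / bin_size).astype(int)                      # col: radius
    keep = (0 <= i) & (i < image_width) & (0 <= j) & (j < image_width)
    np.add.at(img, (i[keep], j[keep]), 1)
    return img

# Scene/model correspondences are then scored by correlating spin images
# computed at matched oriented points, independent of pose.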
Recent successes with superconducting Josephson junction qubits make them prime candidates for the implementation of quantum computing. This doctoral thesis details the study of a niobium Josephson junction circuit for quantum computing applications. The thesis covers two main areas: 1) the fabrication of sub-micron niobium Josephson junction devices using a Nb/Al/AlOx/Nb trilayer process, and 2) measurements of unique quantum properties of a superconducting device proposed as a quantum bit--the Persistent Current (PC) qubit. The thesis discusses the fabrication of the niobium Josephson junction devices, which is integral to the design and measurement of the circuit. The devices were fabricated at MIT Lincoln Laboratory using optical projection lithography to define features. A technique to produce more uniform critical-current densities across a wafer is developed within the scope of the thesis. We also present experimental work on the PC qubit performed at dilution refrigerator temperatures (T [approximately] 12 mK). Microwave spectroscopy was used to map the energy level separation between macroscopic quantum states of the qubit system. We measured the intrawell energy relaxation time [tau]d between quantum levels in this particular device. The intrawell relaxation measurements are important in determining whether a promising decoherence time can be achieved in Nb-based Josephson devices...
CMOS (Complementary Metal-Oxide-Semiconductor) imager technology, as compared with mature CCD (Charge-Coupled Device) imager technology, has the advantages of higher circuit integration, lower power consumption, and potentially lower price. These advantages make the technology a strong candidate for next-generation solid-state imaging applications. However, CMOS processes were originally developed for high-performance digital circuits, and fabricating high-quality embedded image sensors with CMOS technologies is not a straightforward task. This motivates the study of CMOS technologies for imaging applications presented in this thesis. The major content of this study can be partitioned into four parts: (a) A two-stage characterization methodology is developed for sensor optimization, including the characterization of large-area photodiodes and comparative analyses of small-dimension sensor arrays with various pixel structures, junction types of the sensors, and other process-related conditions. (b) The mechanism of hot-carrier-induced excess minority carriers generated at the in-pixel transistors is identified and investigated. The influence of the excess carriers on imager performance is analyzed, and suggestions on the pixel design are provided. (c) Signal cross-talk between adjacent pixels is quantified and studied using a sensor array with a specially designed metal shield pattern...
CMOS technology provides an attractive alternative to the currently dominant CCD technology for implementing low-power, low-cost imagers with high levels of integration. Two pixel configurations are possible in CMOS technology: active and passive. The active pixel requires a minimum of three transistors to convert light to voltage. The passive pixel, on the other hand, consists of a single transistor, and its output is in the form of charge. Column-parallel opamps are used to amplify the charge to a voltage output. The main advantage of the passive pixel is a higher fill factor in a given pixel geometry. This advantage becomes increasingly important as we scale to smaller pixel sizes. The higher fill factor comes at a high cost as the charge output on the high impedance node of the column line is susceptible to disturbances, namely a parasitic current and temporal noise. The goal of this thesis is to determine the source and effects of the disturbances on the image sensor characteristics and the repercussions for scaling to high-density arrays. A signal-dependent parasitic current composed of optically-generated carrier diffusion, blooming and subthreshold currents contaminates the pixel output. This parasitic current is detrimental to the imager because a few bright pixels can affect the rest of the pixels on the column line...
A threshold signature or decryption scheme is a distributed implementation of a cryptosystem, in which the secret key is secret-shared among a group of servers. These servers can then sign or decrypt messages by following a distributed protocol. The goal of a threshold scheme is to protect the secret key in a highly fault-tolerant way. Namely, the key remains secret, and correct signatures or decryptions are always computed, even if the adversary corrupts less than a fixed threshold of the participating servers. We show that threshold schemes can be constructed by putting together several simple distributed protocols that implement arithmetic operations, like multiplication or exponentiation, in a threshold setting. We exemplify this approach with two discrete-log based threshold schemes, a threshold DSS signature scheme and a threshold Cramer-Shoup cryptosystem. Our methodology leads to threshold schemes which are more efficient than those implied by general secure multi-party computation protocols. Our schemes take a constant number of communication rounds, and the computation cost per server grows by a factor linear in the number of the participating servers compared to the cost of the underlying secret-key operation. We consider three adversarial models of increasing strength. We first present distributed protocols for constructing threshold cryptosystems secure in the static adversarial model...
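The secret-sharing building block underlying such schemes can be illustrated with a minimal Shamir sharing sketch (a toy illustration, not the thesis protocols; the modulus and parameters are toy values):

import random

P = 2 ** 127 - 1                       # a prime modulus (toy choice)

def share(secret, t, n):
    """Split `secret` into n shares; any t+1 of them reconstruct it,
    while any t of them reveal nothing about it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):                          # evaluate the degree-t polynomial
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# In an actual threshold scheme the key is never rebuilt in one place:
# each server applies its share (e.g., computes a partial signature)
# and the t+1 partial results are combined instead.
shares = share(123456789, t=2, n=5)
assert reconstruct(shares[:3]) == 123456789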
In April 2008 the World Bank and the Local Government Engineering Department (LGED) commenced a study with the following objectives: (i) to assess fiduciary and operational risks in LGED's management of projects, assets, and other resources, and in the oversight function of the Local Government Division (LGD), Ministry of Local Government, Rural Development and Cooperatives; (ii) to evaluate the efficacy of external review of decision-making by LGED and the LGD; (iii) to identify options for future monitoring of operational risks in LGED and the LGD; and (iv) to prioritize realistic, available options for effectively minimizing the major operational risks identified. This report addresses the last of these objectives. It is based on discussions in Dhaka on 14-20 March with senior LGED staff and the Operational Risk Assessment (ORA) team leader, and on follow-up work by LGED staff through March 30. The report identifies and categorizes three different types of risks. The first type includes risks that LGED has the authority to take the necessary actions to address...
This paper provides a comprehensive insight into current trends and developments in Concurrent Engineering for the integrated development of products and processes, with the goal of completing the entire cycle in a shorter time, at lower overall cost, and with fewer engineering design changes after product release. The evolution and definition of Concurrent Engineering are addressed first, followed by a concise review of the following elements of the concurrent engineering approach to product development: Concept Development: The Front-End Process, Identifying Customer Needs and Quality Function Deployment, Establishing Product Specifications, Concept Selection, Product Architecture, Design for Manufacturing, Effective Rapid Prototyping, and The Economics of Product Development. An outline of a computer-based tutorial developed by the authors and other graduate students funded by NASA (accessible via the World Wide Web) is provided in this paper. A brief discussion of teamwork for successful concurrent engineering is included. Case histories of concurrent engineering implementation at North American and European companies are outlined, with references to textbooks authored by Professor Menon and other writers. A comprehensive bibliography on concurrent engineering is included in the paper.
We investigate the feasibility of obtaining highly trustworthy results using crowdsourcing on complex engineering tasks. Crowdsourcing is increasingly seen as a potentially powerful way of increasing the supply of labor for solving society's problems. While domains such as citizen science, citizen journalism, and knowledge organization (e.g., Wikipedia) have seen many successful applications, there have been fewer applications focused on solving engineering problems, especially those involving complex tasks. This may be in part because of concerns that low-quality input into engineering analysis and design could result in failed structures and loss of life. We compared the quality of work of the anonymous workers of Amazon Mechanical Turk (AMT), an online crowdsourcing service, with that of expert engineers in solving the complex engineering task of evaluating virtual wind tunnel data graphs. On this representative complex engineering task, our results showed little difference between expert engineers and crowdworkers in the quality of their work, and we explain the reasons for these results. Along with showing that crowdworkers are effective at completing new complex tasks, our paper supplies a number of important lessons learned in the process of collecting this data from AMT...
We investigate the extent to which advances in the health and life sciences (HLS) are dependent on research in the engineering and physical sciences (EPS), particularly physics, chemistry, mathematics, and engineering. The analysis combines two different bibliometric approaches. The first approach to analyzing the 'EPS-HLS interface' is based on term map visualizations of HLS research fields. We consider 16 clinical fields and five life science fields. On the basis of expert judgment, EPS research in these fields is studied by identifying EPS-related terms in the term maps. In the second approach, a large-scale citation-based network analysis is applied to publications from all fields of science. We work with about 22,000 clusters of publications, each representing a topic in the scientific literature. Citation relations are used to identify topics at the EPS-HLS interface. The two approaches complement each other: the advantages of working with textual data compensate for the limitations of working with citation relations, and the other way around. An important advantage of working with textual data is the in-depth qualitative insight it provides; working with citation relations, on the other hand, yields many relevant quantitative statistics. We find that EPS research contributes to HLS developments mainly in the following five ways: new materials and their properties; chemical methods for analysis and molecular synthesis; imaging of parts of the body as well as of biomaterial surfaces; medical engineering mainly related to imaging...
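The citation-based criterion can be caricatured in a few lines of Python (our toy version; the paper's actual clustering and thresholds are more sophisticated, and the 0.2 share below is an invented placeholder):

def at_eps_hls_interface(cluster_citations, field_of, min_share=0.2):
    """cluster_citations: cited publication ids for one topic cluster.
    field_of: maps a publication id to 'EPS', 'HLS', or 'other'.
    A cluster lies at the interface when a substantial share of its
    citation links goes to each side."""
    fields = [field_of(p) for p in cluster_citations]
    total = len(fields) or 1
    eps_share = fields.count("EPS") / total
    hls_share = fields.count("HLS") / total
    return eps_share >= min_share and hls_share >= min_share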
Following established tradition, software engineering today is rooted in a conceptually centralized way of thinking. The primary SE artifact is a specification of a machine -- a computational artifact -- that would meet the (elicited and) stated requirements. Therein lies a fundamental mismatch with (open) sociotechnical systems, which involve multiple autonomous social participants or principals who interact with each other to further their individual goals. No central machine governs the behaviors of the various principals. We introduce Interaction-Oriented Software Engineering (IOSE) as an approach expressly suited to the needs of open sociotechnical systems. In IOSE, specifying a system amounts to specifying the interactions among the principals as protocols. IOSE reinterprets the classical software engineering principles of modularity, abstraction, separation of concerns, and encapsulation in a manner that accords with the realities of sociotechnical systems. To highlight the novelty of IOSE, we show where well-known SE methodologies, especially those that explicitly aim to address either sociotechnical systems or the modeling of interactions among autonomous principals, fail to satisfy the IOSE principles...
Despite its relative youth, computer science has become a well-established discipline, granting over 2% of the bachelor's degrees in the United States (U.S. Department of Education, 2010). For this reason, it is important that we understand the nature of computer science and the likely direction for the development of inquiry in computer science in the future. This paper examines several perspectives on the nature of the methods of computer science inquiry: empiricist methods, rationalist methods, and an engineering stance. It argues that the empiricist and rationalist stances play identifiable roles in the scientific nature of computer science reasoning but that the engineering stance does not. Following the trend in the maturation of other sciences, this paper recommends an overhaul of computer science curricula.