Self-supported freestanding membranes are films without any underlying supporting layer. The key advantage of such structures is that, absent mechanical and chemical substrate effects, the true native properties of the material can be probed. This is crucial because many studies of materials later used as freestanding membranes are performed on films clamped to substrates or on bulk samples.
This thesis focuses on the synthesis, fabrication, and electrical studies of freestanding ultrathin (< 40 nm) oxide membranes. It is also among the first demonstrations of electrically probing nanoscale freestanding oxide membranes. Fabrication of such membranes is non-trivial, as oxide materials are often brittle and difficult to handle; it requires an understanding of thin-plate mechanics coupled with a controllable thin-film deposition process. Taking things a step further, electrically probing these membranes required the design of a complex device architecture and extensive optimization of nano-fabrication processes. The challenges and the optimized fabrication method for such membranes are demonstrated.
Three materials are probed in this study: VO2, TiO2, and CeO2. VO2 is used to understand structural considerations for electronic phase change and the nature of ionic liquid gating...
In this work we study the metal-insulator transition in vanadium dioxide and samarium nickelate and the application of such transitions in electronic devices. Chapter 1 provides an introduction to the Mott metal-insulator transition mechanisms and an overview of the interplay between various degrees of freedom in correlated oxides. The phase transition in vanadium dioxide is presented as an example to emphasize the overarching electron-phonon and electron-electron interaction driven transition mechanisms. In Chapter 2, we describe the growth and structure-functionality relationship of thin film transition metal oxides. Chapter 3 goes on to examine the mechanism of voltage-triggered metal-insulator transition in vanadium dioxide two-terminal threshold switches through dynamic studies. Chapter 4 delves into the mechanism of conductance modulation in electrolyte-gated vanadium dioxide transistors, which reveals the importance of electrochemical effects versus electrostatic effects in these devices. Utilizing the idea of electrochemical doping, we designed and realized a strongly correlated insulating phase in samarium nickel oxide through electron doping with hydrogen and lithium interstitials in Chapter 5. Such techniques can be extended to other materials to achieve reversible and controllable carrier doping with high concentration to study the related physics.
In this dissertation, I study the physical behavior of nanoscale magnetic materials and build spin-based transistors that encode information in magnetic domain walls. It can be argued that energy dissipation is the most serious problem in modern electronics, and one that has resisted a breakthrough. Heat wasted during computing both squanders energy and hinders further technology scaling. This is an opportunity for physicists and engineers to devise creative solutions for more energy-efficient computing. I present the device we have designed, called domain wall logic (DW-Logic). Information is stored in the position of a magnetic domain wall in a ferromagnetic wire and read out using a magnetic tunnel junction. This hybrid design uses electrical current as the input and output, keeping the device compatible with charge-based transistors.
I build an iterative model to predict both the micromagnetic and circuit behavior of DW-Logic, showing that a single device can operate as a universal gate. The model shows we can build complex circuits, including an 18-gate full adder, and allows us to predict the device switching energy compared to complementary metal-oxide-semiconductor (CMOS) transistors. Comparing 15 nm feature nodes...
Semiconductor micro- and nano-cavities are excellent platforms for experimental studies of optical cavities, lasing dynamics, and cavity quantum electrodynamics (QED). Common materials for such experiments are narrow-bandgap semiconductors with well-developed epitaxial growth technologies, such as GaAs and InP. Gallium nitride (GaN) and its alloys are industrially viable materials with wide direct bandgaps, low surface recombination velocities, and large exciton binding energies, offering the possibility of realizing light-matter interaction at room temperature. Controlling light-matter interaction is at the heart of nanophotonic research, leading to ultra-low-threshold lasing, photonic qubits, and optical strong coupling. Technologically, owing to its blue emission, GaN photonic cavities with indium gallium nitride (InGaN) active media serve as efficient light sources for the fast-growing photonics industry: optical computing and communication networks, display technology, and quantum information processing.
The main challenges in fabricating high-quality GaN cavities stem from the material's chemical inertness and from low material quality caused by strain-induced defects and threading dislocations. In this dissertation...
Plasmonic waves are waves of mobile charge carriers arising from their collective oscillations. They can be excited in solid-state conducting materials and behave distinctively in different dimensionalities. With the fabrication technologies available for solid-state materials, one can tailor these dimensional properties by engineering the boundaries and interfaces of the plasmonic medium. For instance, plasmonic waves in two-dimensional (2D) conductors, such as semiconductor heterojunctions and graphene, exhibit strong subwavelength confinement, with a wavelength roughly a factor of 100 shorter than the electromagnetic wavelength at the same frequency. Hence, 2D plasmonic devices can be constructed below the diffraction limit of light. Utilizing this ultra-subwavelength confinement is the main motivation of this thesis.
This thesis establishes the machinery behind the unique behaviors of 2D plasmons, and compares them to plasmons in higher dimensions, namely plasma oscillations in bulk materials and surface plasmons on conducting-insulating interfaces. The Coulomb restoring force and mobile charge carrier inertia causing the collective oscillations are formulated into a transmission-line model. This formulation is used to engineer ultra-subwavelength plasmonic circuits in gigahertz integrated electronics and terahertz metamaterials.
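As a rough illustration of the confinement factor quoted above, the textbook long-wavelength dispersion of an ungated two-dimensional electron gas can be used to compare the plasmon wavelength with the free-space electromagnetic wavelength at the same frequency. The parameter values below (GaAs-like sheet density, effective mass, effective permittivity) and the 1 THz operating point are assumptions for illustration, not values from this thesis.

```python
import math

# Illustrative sketch: 2D plasmon wavelength from the textbook ungated
# 2DEG dispersion, omega^2 = n e^2 q / (2 m* eps_eff). All numerical
# parameters below are assumed, GaAs-like values.

E = 1.602e-19        # elementary charge (C)
EPS0 = 8.854e-12     # vacuum permittivity (F/m)
C = 2.998e8          # speed of light (m/s)

def plasmon_wavelength(f_hz, n_2d, m_eff, eps_rel):
    """Return the 2D plasmon wavelength (m) at frequency f_hz."""
    omega = 2 * math.pi * f_hz
    q = 2 * eps_rel * EPS0 * m_eff * omega**2 / (n_2d * E**2)
    return 2 * math.pi / q

f = 1e12                           # 1 THz drive frequency (assumed)
n = 1e15                           # sheet density in m^-2 (assumed)
m_star = 0.067 * 9.109e-31         # GaAs-like effective mass (kg)
lam_p = plasmon_wavelength(f, n, m_star, 7.0)  # eps_eff ~ semiconductor/air average
lam_em = C / f                     # free-space EM wavelength

print(f"plasmon wavelength ~ {lam_p * 1e9:.0f} nm")
print(f"confinement factor ~ {lam_em / lam_p:.0f}x")
```

With these assumed numbers the plasmon wavelength lands in the sub-micron range at 1 THz, giving a confinement factor of order a few hundred, consistent with the "factor of 100" scale cited above.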
As one of the demonstration platforms...
In 1972 the ionized cluster beam (ICB) deposition technique was introduced as a new method for thin film deposition. At that time the use of clusters was postulated to enhance film nucleation and adatom surface mobility, resulting in high-quality films. Although a few researchers reported singly ionized clusters containing 10^2–10^3 atoms, others were unable to repeat their work. The consensus now is that the film effects in the early investigations were due to self-ion bombardment rather than clusters. More recently (early 1992), Gspann demonstrated synthesis of large zinc clusters without the use of a carrier gas, a result repeated in our laboratory. These clusters resulted from significant changes in two source parameters: the crucible pressure was increased from the earlier 2 Torr to several thousand Torr, and a converging-diverging nozzle 18 mm long and 0.4 mm in diameter at the throat was used in place of the 1 mm x 1 mm nozzle used in the early work. While this is practical for zinc and other high-vapor-pressure materials, it remains impractical for many materials of industrial interest such as gold, silver, and aluminum. The work presented here describes results using gold and silver at pressures of around 1 and 50 Torr in order to study the effect of the pressure and nozzle shape. No significant numbers of large clusters were detected. Deposited films were studied by atomic force microscopy (AFM) for roughness analysis...
Clusters are aggregations of atoms or molecules, generally intermediate in size between individual atoms and aggregates large enough to be called bulk matter. Clusters can also be called nanoparticles, because their size is on the order of nanometers or tens of nanometers. A new field called nanostructured materials has begun to take shape, taking advantage of these atom clusters. The ultra-small size of the building blocks leads to dramatically different properties, and it is anticipated that such atomically engineered materials can be tailored to perform as no previous material could.

The ionized cluster beam (ICB) thin-film deposition technique was first proposed by Takagi in 1972. It was based on using a supersonic jet source to produce, ionize, and accelerate beams of atomic clusters onto substrates in a vacuum environment. Conditions for formation of cluster beams suitable for thin-film deposition have only recently been established, following twenty years of effort. Zinc clusters over 1,000 atoms in average size have been synthesized both in our laboratory and in that of Gspann. More recently, other methods of synthesizing clusters and nanoparticles, using different types of cluster sources, have come under development.

In this work...
Small errors can prove catastrophic. A very small cause that escapes our notice can determine a considerable effect that we cannot fail to see, and then we say that the effect is due to chance: small differences in the initial conditions produce very great ones in the final phenomena, and a small error in the former produces an enormous error in the latter. When dealing with any kind of electrical device specification, it is important to note that a pair of test conditions defines a test: the forcing function and the limit. Forcing functions define the external operating constraints placed upon the device under test; the test itself measures how well the device responds to these constraints. Forcing inputs to threshold, for example, represents the most difficult testing, because it places those inputs as close as possible to the actual switching critical points and guarantees that the device will meet its input-output specifications.

Prediction becomes impossible by the classical analytical methods of Newton and Euclid. We have found that nonlinear dynamics is the natural state of all circuits and devices, and opportunities exist for effective error detection in a nonlinear-dynamics and chaos environment.

Today, a set of linear limits is established around every aspect of digital and analog circuits, and devices that fail these tests are considered bad. Deterministic chaos in circuits is a fact, not a possibility, as confirmed by our Ph.D. research. In practice, for linear standard informational methodologies...
This dissertation describes research carried out in developing an MPS (Multipurpose Portable System), which consists of an instrument and many accessories. The instrument is portable, hand-held, and rechargeable-battery operated, and it measures temperature, absorbance, and concentration of samples using optical principles. The system also performs auxiliary functions such as incubation and mixing, and can be used in environmental, industrial, and medical applications.

Research emphasis is on system modularity, easy configuration, accuracy of measurements, power-management schemes, reliability, low cost, computer interfacing, and networking. The instrument can send data to a computer for analysis and presentation, or to a printer.

This dissertation presents a full working system. Its realization involved integration of hardware, firmware for the micro-controller written in assembly language, software written in C, and other application modules.

The instrument contains the Optics, Transimpedance Amplifiers, Voltage-to-Frequency Converters, LCD Display, Lamp Driver, Battery Charger, Battery Manager, Timer, Interface Port, and Micro-controller.

The accessories are a Printer, a Data Acquisition Adapter (to transfer the measurements to a computer via the Printer Port and expand the Analog/Digital conversion capability)...
Global connectivity for anyone, at any place, at any time, providing high-speed, high-quality, and reliable communication channels for mobile devices, is now becoming a reality. The credit goes mainly to recent technological advances in wireless communications, comprising a wide range of technologies, services, and applications that fulfill the particular needs of end-users in different deployment scenarios (Wi-Fi, WiMAX, and 3G/4G cellular systems). In such a heterogeneous wireless environment, one of the key ingredients for providing efficient ubiquitous computing with guaranteed quality and continuity of service is the design of intelligent handoff algorithms.

Traditional single-metric handoff decision algorithms, such as those based on Received Signal Strength (RSS), are not efficient and intelligent enough to minimize the number of unnecessary handoffs, decision delays, and call-dropping and/or blocking probabilities.

This research presents a novel approach to the design and implementation of a multi-criteria vertical handoff algorithm for heterogeneous wireless networks. Several parallel Fuzzy Logic Controllers were utilized in combination with different types of ranking algorithms and metric-weighting schemes to implement two major modules: the first module estimated the necessity of handoff...
Inverters play a key role in connecting sustainable energy (SE) sources to local loads and the ac grid. Although the use of renewable sources has expanded rapidly in recent years, fundamental research on the design of inverters specialized for these systems is still needed. Recent advances in power electronics have led to new topologies and switching patterns for single-stage power conversion that are appropriate for SE sources and energy storage devices. The current source inverter (CSI) topology, along with a newly proposed switching pattern, can convert the low dc voltage to the line ac in only one stage. Simple implementation and high reliability, together with the potential advantages of higher efficiency and lower cost, make the so-called single-stage boost inverter (SSBI) a viable competitor to existing SE-based power conversion technologies.

A dynamic model is one of the most essential requirements for performance analysis and control design of any engineering system. Thus, in order to achieve satisfactory operation, it is necessary to derive a dynamic model for the SSBI system. However, because of the switching behavior and nonlinear elements involved...
Modern power networks incorporate communications and information technology infrastructure into the electrical power system to create a smart grid in terms of control and operation. The smart grid enables real-time communication and control between consumers and utility companies, allowing suppliers to optimize energy usage based on price preference and system technical issues. The smart grid design aims to provide overall power-system monitoring and to create protection and control strategies that maintain system performance, stability, and security.

This dissertation contributed to the development of a novel smart-grid test-bed laboratory with integrated monitoring, protection, and control systems. This test-bed was used as a platform to test the smart-grid operational ideas developed here. The implementation of this system in real-time software creates an environment for studying, implementing, and verifying novel control and protection schemes developed in this dissertation. Phasor measurement techniques were developed using the available data acquisition (DAQ) devices in order to monitor all points in the power system in real time. This provides a practical view of system parameter changes, abnormal conditions, and stability and security information, and delivers valuable measurements to power-system operators in energy control centers. Phasor measurement technology is an excellent solution for improving system planning...
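The phasor measurement step described above is commonly implemented with a one-cycle discrete Fourier transform, the textbook computation behind phasor measurement units. The sketch below shows that standard technique on a synthetic 60 Hz waveform; the sampling rate and signal parameters are assumed for illustration and do not come from the test-bed described here.

```python
import math
import cmath

# Minimal sketch of one-cycle DFT phasor estimation (textbook PMU
# technique). The 60 Hz amplitude/phase and 32 samples-per-cycle
# setup below are assumed illustrative values.

def estimate_phasor(samples):
    """One-cycle DFT: return the RMS-scaled complex phasor of a
    waveform sampled over exactly one fundamental period."""
    n = len(samples)
    acc = sum(x * cmath.exp(-2j * math.pi * k / n)
              for k, x in enumerate(samples))
    return math.sqrt(2) / n * acc

# Synthetic 60 Hz voltage: 120 V peak, 0.35 rad phase, 32 samples/cycle
amp, phase, spc = 120.0, 0.35, 32
wave = [amp * math.cos(2 * math.pi * k / spc + phase) for k in range(spc)]

ph = estimate_phasor(wave)
print(f"magnitude (RMS): {abs(ph):.3f} V, angle: {cmath.phase(ph):.3f} rad")
```

The estimator recovers the RMS magnitude (peak divided by the square root of two) and the phase angle of the sampled sinusoid, which is the quantity a monitoring system would stream to the control center.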
Modern electric machine drives, particularly three-phase permanent magnet machine drive systems, are an indispensable part of high-power-density products such as hybrid electric vehicles, large propulsion systems, and automation products. The reliability and cost of these products are directly related to the reliability and cost of the drive systems. Matching the electric machine and its drive system for optimal cost and operation has been a major challenge in industrial applications. The main objective of this dissertation is to find a design and control scheme that best balances the reliability and optimality of the electric machine-drive system. The effort presented here is motivated by the need for new techniques that connect the design and control of electric machines and drive systems.

A highly accurate and computationally efficient modeling process was developed to monitor the magnetic, thermal, and electrical aspects of the electric machine in its operational environments. The modeling process was also utilized in the design process in the form of a finite-element-based optimization, including hardware-in-the-loop optimization. It was later employed in the design of very accurate and highly efficient physics-based customized observers required for fault diagnosis as well as for sensorless rotor position estimation. Two test setups with different ratings and topologies were numerically and experimentally tested to verify the effectiveness of the proposed techniques.

The modeling process was also employed in the real-time demagnetization control of the machine. Various real-time scenarios were successfully verified. It was shown that this process offers the potential to optimally redefine the assumptions used in sizing the permanent magnets of the machine and the DC bus voltage of the drive for the worst operating conditions.
The mathematical development and stability criteria of the physics-based modeling of the machine...
This work presents the development of an in-plane vertical micro-coaxial probe, fabricated using a bulk micromachining technique, for high-frequency material characterization. The coaxial probe was fabricated in a silicon substrate by standard photolithography and deep reactive ion etching (DRIE). The through-hole structure forming the coaxial probe was etched and metalized with a diluted silver paste. A co-planar waveguide configuration was integrated with the design to characterize the probe. The electrical and RF characteristics of the coaxial probe were determined by simulating the probe design in Ansoft's High Frequency Structure Simulator (HFSS). The reflection coefficient and transducer gain of the probe were measured up to 65 GHz using a vector network analyzer (VNA). The probe demonstrated excellent results over a wide frequency band, indicating its suitability for integration with millimeter-wave packaging systems as well as for characterizing unknown materials at high frequencies.

The probe was then placed in contact with three materials whose unknown permittivities were to be determined. To accomplish this, the coaxial probe was placed in contact with the material under test and electromagnetic waves were directed to the surface using the VNA...
A new thermal management technology for advanced ceramic microelectronic packages has been developed, incorporating miniature heat pipes embedded in the ceramic substrate. The heat pipes use an axially grooved wick structure and water as the working fluid. Prototype substrate/heat-pipe systems were fabricated using high-temperature co-fired ceramic (alumina). The heat pipes were nominally 81 mm in length, 10 mm in width, and 4 mm in height, and were charged with approximately 50–80 μL of water. Platinum thick-film heaters were fabricated on the surface of the substrate to simulate heat-dissipating electronic components. Several thermocouples were affixed to the substrate to monitor temperature, and one end of the substrate was affixed to a heat sink maintained at constant temperature. The prototypes were tested and shown to operate successfully and reliably with thermal loads over 20 Watts, with thermal input from single and multiple sources along the surface of the substrate. Temperature distributions are discussed for the various configurations, and the effective thermal resistance of the substrate/heat-pipe system is calculated. Finite element analysis was used to support the experimental findings and to better understand the sources of the system's thermal resistance.
Modern datasets are often massive due to the sharp decrease in the cost of collecting and storing data. Many are endowed with relational structure modeled by a graph, an object comprising a set of points and a set of pairwise connections between them. A ``signal on a graph'' has elements related to each other through a graph---it could model, for example, measurements from a sensor network. In this dissertation we study several problems in signal processing and inference on graphs.
We begin by introducing an analogue to Heisenberg's time-frequency uncertainty principle for signals on graphs. We use spectral graph theory and the standard extension of Fourier analysis to graphs. Our spectral graph uncertainty principle makes precise the notion that a highly localized signal on a graph must have a broad spectrum, and vice versa.
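The localization/spectrum trade-off can be illustrated numerically with the Laplacian quadratic form x^T L x / x^T x, which equals the mean Laplacian eigenvalue weighted by the signal's graph spectrum and so serves as a simple proxy for spectral spread: a sharply localized delta signal yields a large value, while a maximally spread constant signal yields zero. The path graph and test signals below are illustrative assumptions, not the constructions used in this dissertation.

```python
# Toy numerical check of the graph uncertainty idea using the
# combinatorial Laplacian quadratic form as a spectral-spread proxy.
# The 20-node path graph and the two signals are assumed examples.

def path_laplacian(n):
    """Combinatorial Laplacian L = D - A of an n-node path graph."""
    L = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        L[i][i] += 1; L[i + 1][i + 1] += 1
        L[i][i + 1] -= 1; L[i + 1][i] -= 1
    return L

def spectral_spread(L, x):
    """x^T L x / x^T x: mean Laplacian eigenvalue weighted by the
    signal's graph Fourier spectrum."""
    norm2 = sum(v * v for v in x)
    quad = sum(x[i] * sum(L[i][j] * x[j] for j in range(len(x)))
               for i in range(len(x)))
    return quad / norm2

n = 20
L = path_laplacian(n)
delta = [1.0 if i == n // 2 else 0.0 for i in range(n)]  # highly localized
smooth = [1.0 for _ in range(n)]                         # fully spread out

print("spectral spread of delta :", spectral_spread(L, delta))
print("spectral spread of smooth:", spectral_spread(L, smooth))
```

The localized delta signal registers a strictly larger spectral spread than the constant signal, the qualitative content of the uncertainty principle described above.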
Next, we consider the problem of detecting a random walk on a graph from noisy observations. We characterize the performance of the optimal detector through the (type-II) error exponent, borrowing techniques from statistical physics to develop a lower bound exhibiting a phase transition. Strong performance is only guaranteed when the signal-to-noise ratio exceeds twice the random walk's entropy rate. Monte Carlo simulations show that the lower bound is quite close to the true exponent.
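The stated threshold can be made concrete with the standard entropy-rate formula for a simple random walk on an undirected graph, H = sum_i pi_i log d_i with stationary distribution pi_i = d_i / 2|E|; the condition then reads SNR > 2H. The sketch below evaluates this for an assumed example graph (a 6-node cycle), not a graph from the dissertation.

```python
import math

# Entropy rate of a simple random walk on an undirected graph,
# H = sum_i pi_i * log(d_i) with pi_i = d_i / (2|E|), evaluated on
# an assumed example graph. The abstract's detection condition is
# then SNR > 2H.

def entropy_rate(adj):
    """Entropy rate (nats) of a simple random walk on an undirected
    graph given as an adjacency list {node: [neighbors]}."""
    degs = {v: len(nbrs) for v, nbrs in adj.items()}
    two_e = sum(degs.values())          # 2|E| = sum of degrees
    return sum(d / two_e * math.log(d) for d in degs.values())

# 6-node cycle: every vertex has degree 2, so H = log 2
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
H = entropy_rate(cycle)
print(f"entropy rate: {H:.4f} nats; detection threshold 2H = {2 * H:.4f}")
```

On a regular graph every vertex has the same degree d, so the formula collapses to H = log d, giving log 2 for the cycle.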
Analyzing images to infer physical scene properties is a fundamental task in computer vision. It is by nature an ill-posed inverse problem, because imaging is a complicated, information-lossy physical and measurement process that cannot be deterministically inverted. This dissertation presents theory and algorithms for handling ambiguities in a variety of low-level vision problems. They are based on two key ideas: (1) explicitly modeling and reporting uncertainties are beneficial to visual inference; and (2) using local models can significantly reduce ambiguities that would exist in pixelwise analysis.
In the first part of the dissertation, we study the color measurement pipeline of consumer digital cameras, and consider the inherent uncertainty of undoing the effects of tone-mapping. We introduce statistical models for this uncertainty and algorithms for fitting it to given cameras or imaging pipelines. Once fit, the model provides for each tone-mapped color a probability distribution over linear scene colors that could have induced it, which is demonstrated to be useful for a number of downstream inference applications.
In the second part of the dissertation, we study the pixelwise ambiguities in physics-based visual inference and present theory and algorithms for employing local models to eliminate or reduce these ambiguities. In shape from shading...
The continued march of technological progress, epitomized by Moore’s Law, provides the microarchitect with increasing numbers of transistors to employ as we continue to shrink feature geometries. Physical limitations impose new constraints upon designers in the areas of overall power and localized power density. Techniques to scale threshold and supply voltages to lower values in order to reduce the power consumption of the part have also run into physical limitations, exacerbating power and cooling problems in deep sub-micron CMOS process generations. Smaller device geometries are also subject to increased sensitivity to common failure modes as well as manufacturing process variability.
In the face of these added challenges, we observe a shift in the focus of the industry away from ever-larger single-core chips designed to reduce single-threaded latency, toward designs that employ multiple cores on a single chip to improve throughput. While the early multicore era utilized the existing single-core designs of the previous generation in small numbers, subsequent generations have introduced cores tailored to multicore use. These cores seek to achieve power-efficient throughput and have led to a new emphasis on throughput-oriented computing...
A series of fusion techniques is developed and applied to EEG and
pupillary recording analysis in a rapid serial visual presentation
(RSVP) based image triage task, in order to improve the accuracy
of capturing single-trial neural/pupillary signatures (patterns)
associated with visual target detection.
The brain response to visual stimuli is not a localized pulse;
instead, it reflects time-evolving neurophysiological activities
distributed selectively in the brain. To capture the evolving
spatio-temporal pattern, we divide an extended (``global") EEG
data epoch, time-locked to each image stimulus onset, into
multiple non-overlapping smaller (``local") temporal windows.
Classifiers are applied to the EEG data in each local temporal
window, and the outputs of these local classifiers are then
fused to enhance the overall detection performance.
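One common way to fuse local classifier outputs at the decision level is to average their log-odds, which weights each window's evidence equally. The sketch below illustrates this with made-up per-window probabilities; it is not the specific fusion rule used in the study.

```python
import math

# Minimal sketch of decision-level fusion of local-window classifier
# outputs by log-odds averaging. The per-window probabilities below
# are made-up illustrative values, not data from the study.

def fuse_log_odds(probs):
    """Fuse per-window posterior probabilities of 'target present'
    by averaging their log-odds (equal window weights assumed)."""
    logit = sum(math.log(p / (1 - p)) for p in probs) / len(probs)
    return 1.0 / (1.0 + math.exp(-logit))

# Outputs of three hypothetical local-window classifiers for one trial
local_probs = [0.80, 0.70, 0.60]
fused = fuse_log_odds(local_probs)
print(f"fused target probability: {fused:.3f}")
```

Because all three windows lean toward "target", the fused probability stays above 0.5 while moderating the most confident single window.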
According to the concept of induced/evoked brain rhythms, the EEG
response can be decomposed into different oscillatory components
and the frequency characteristics for these oscillatory components
can be evaluated separately from the temporal characteristics.
While the temporal-based analysis achieves fairly accurate
detection performance, the frequency-based analysis can further
improve the overall detection accuracy and robustness if the
frequency-based and temporal-based results are fused.

Pupillary response provides another modality for a single-trial
image triage task. We developed a pupillary response feature
construction and selection procedure to extract/select the useful
features that help to achieve the best classification performance.
The classification results based on both modalities (pupillary and
EEG) are further fused at the decision level. Here...
The relentless scaling of semiconductor devices and high integration levels have led to a steady increase in the cost of manufacturing test for integrated circuits (ICs). The higher test cost leads to an increase in the product cost of ICs. Product cost is a major driver in the consumer electronics market, which is characterized by low profit margins and the use of a variety of core-based system-on-chip (SoC) designs. Packaging has also been recognized as a significant contributor to the product cost for SoCs. Packaging cost and the test cost for packaged chips can be reduced significantly by the use of effective test methods at the wafer level, also referred to as wafer sort.
Test application time is a major practical constraint for wafer sort, even more than for package test. Therefore, not all the scan-based digital test patterns can be applied to the die under test. This thesis first presents a test-length selection technique for wafer-level testing of core-based SoCs. This optimization technique, which is based on a combination of statistical yield modeling and integer linear programming (ILP), provides the pattern count for each embedded core during wafer sort such that the probability of screening defective dies is maximized for a given upper limit on the SoC test time. A large number of wafer-probe contacts can potentially lead to higher yield loss during wafer sort. An optimization framework is therefore presented to address test access mechanism (TAM) optimization and test-length selection for wafer-level testing...
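The structure of the selection problem can be illustrated with a toy greedy heuristic: under a simple (assumed) model in which each test pattern of a core independently detects a defect in that core with a fixed probability, patterns are allocated to the core offering the best marginal screening gain per unit test time until the time budget is exhausted. This is only a sketch of the problem shape; the thesis itself uses statistical yield models and an exact ILP formulation, and the core names, probabilities, and times below are invented.

```python
# Toy greedy approximation of test-length selection under a wafer-sort
# time budget. Assumed model: each pattern for a core detects a defect
# in that core independently with fixed probability, so with n patterns
# the screening probability is 1 - (1 - p)^n. All parameters invented.

def allocate_patterns(cores, budget):
    """cores: {name: (per_pattern_time, per_pattern_detect_prob)}.
    Greedily add one pattern at a time to the core with the best
    marginal screening-probability gain per unit of test time."""
    alloc = {name: 0 for name in cores}
    used = 0.0
    while True:
        best, best_rate = None, 0.0
        for name, (t, p) in cores.items():
            if used + t > budget:
                continue
            # adding one pattern shrinks the escape probability by p
            gain = (1 - p) ** alloc[name] * p
            if gain / t > best_rate:
                best, best_rate = name, gain / t
        if best is None:          # no core fits in the remaining budget
            break
        alloc[best] += 1
        used += cores[best][0]
    return alloc, used

# Hypothetical embedded cores: (seconds per pattern, detect prob per pattern)
cores = {"cpu": (2.0, 0.05), "dsp": (1.0, 0.02), "mem": (0.5, 0.01)}
alloc, used = allocate_patterns(cores, budget=50.0)
print("patterns per core:", alloc, "| time used:", used)
```

The greedy rule captures the diminishing-returns structure (each extra pattern for a core is worth less than the last) that makes the budgeted selection non-trivial; the ILP in the text optimizes the same trade-off exactly.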