Free-form surface machining is a fundamental but time-consuming process in modern manufacturing. The central question of this thesis is how to reduce the time a 5-axis CNC (Computer Numerical Control) milling machine takes to sweep an entire free-form surface in the finishing stage. We formulate a non-classical variational time-optimization problem defined on a 2-dimensional manifold, subject to both equality and inequality constraints, with the machining time as the cost functional. We seek a preferable vector field on the surface from which skeletal information about the toolpaths can be obtained. This framework is more amenable to the techniques of continuum mechanics and differential geometry than to path generation and conventional CAD/CAM (Computer Aided Design and Manufacturing) theory. After the formulation, the thesis derives the necessary conditions for optimality. We decompose the problem into a series of optimization problems defined on 1-dimensional streamlines of the vector field, which simplifies the problem significantly. Anisotropy in kinematic performance is of practical importance in high-speed machining. The greedy scheme, which this thesis implements for a parallel hexapod machine tool...
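The abstract does not reproduce the thesis's formulation; as a hedged illustration of the kind of cost functional being minimized (notation mine, not the thesis's), the sweep time over a surface S with local feed rate v and side step g along the streamlines of a unit vector field u can be written schematically as

```latex
T[\mathbf{u}] \;=\; \iint_{S} \frac{dA}{g(\mathbf{x})\,v(\mathbf{x})},
\qquad\text{with}\qquad
T_{\text{path}} \;=\; \int_{\gamma} \frac{ds}{v(s)} ,
```

so that decomposing S into the streamlines \gamma of u reduces the surface problem to a family of 1-dimensional time integrals, which is the simplification the abstract describes.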
The purpose of this study is to understand, at the atomic level, the structural response of zircon (ZrSiO4) to irradiation using molecular dynamics (MD) computer simulations, and to develop topological models that describe these structural changes. Topological signatures, encoded using the concepts of primitive rings and local clusters, were developed and used to differentiate crystalline from non-crystalline atoms in various zircon structures. Since primitive rings and local clusters are general concepts applicable to all materials, and the algorithms to identify them systematically are well established, topological signatures based on them are easy to implement, and the method is applicable to all structures. The method of topological signatures improves on the Wigner-Seitz cell method, which depends on an original crystalline reference grid that is unusable in heavily damaged structures or regions. It also improves on methods based only on local structures limited to the first coordination shell, since one can decide whether or not to include the ring contents of large rings in the signatures, effectively controlling the range of the signatures. The early-stage evolution of non-crystalline disorder and the subsequent recrystallization in zircon collision-cascade simulations were successfully modeled by using the topological signatures to identify non-crystalline atoms. Simply counting displaced atoms could not correctly reproduce the initial peak of structural damage followed by the subsequent annealing stage. Using the topological signatures...
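As a minimal sketch of how such signatures could be used in practice (the encoding below, the sorted sizes of primitive rings passing through an atom, is a simplified stand-in for the thesis's scheme, and all names and values are illustrative):

```python
# Classify atoms as crystalline or non-crystalline by comparing a per-atom
# topological signature against the signature of the perfect crystal.
# Signature here: sorted sizes of the primitive rings containing the atom.

def topological_signature(atom, primitive_rings):
    """Encode an atom's local topology as the sorted sizes of the
    primitive rings that pass through it."""
    return tuple(sorted(len(ring) for ring in primitive_rings if atom in ring))

def classify_atoms(atoms, primitive_rings, reference_signature):
    """Label an atom crystalline if its signature matches the
    perfect-crystal reference, non-crystalline otherwise."""
    return {
        atom: "crystalline"
        if topological_signature(atom, primitive_rings) == reference_signature
        else "non-crystalline"
        for atom in atoms
    }

# Hypothetical rings from a primitive-ring search on a damaged cell; the
# reference signature (4, 6) is a placeholder, not the real zircon value.
rings = [frozenset({0, 1, 2, 3}), frozenset({0, 1, 4, 5, 6, 7})]
print(classify_atoms([0, 1, 2], rings, reference_signature=(4, 6)))
# -> {0: 'crystalline', 1: 'crystalline', 2: 'non-crystalline'}
```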
The MIT research reactor (MITR) is converting from the existing high-enrichment-uranium (HEU) core to a low-enrichment-uranium (LEU) core using a high-density monolithic UMo fuel. The design of an optimum LEU core for the MITR is evolving. The objectives of this study are to benchmark the in-house computer code for the MITR and to perform thermal-hydraulic analyses in support of the LEU design studies. The in-house multi-channel thermal-hydraulics code, MULCH-II, was developed specifically for the MITR. This code was validated against PLTEMP for steady-state analysis, and against RELAP5 and temperature measurements for the loss-of-primary-flow transient. Various fuel configurations were evaluated as part of the LEU core design optimization study. The criteria adopted for the LEU thermal-hydraulic analysis are the limiting safety system settings (LSSS): preventing the onset of nucleate boiling during steady-state operation and avoiding a clad temperature excursion during the loss-of-flow transient. The benchmark results showed that the MULCH-II code is in good agreement with the other computer codes and with experimental data, and it is therefore used as the main tool for this study. In ranking the LEU core design options...
Surfactants, or surface-active agents, are chemicals exhibiting amphiphilic behavior toward a solvent. This amphiphilic character leads to increased activity at interfaces and, in bulk solution, to self-assembly into micellar aggregates beyond a threshold surfactant concentration referred to as the critical micelle concentration (CMC). As a result of these unique attributes, surfactants are used in many pharmaceutical, industrial, and environmental applications, including biological separations, fat metabolism during digestion, drug delivery, and water purification. Selection of the appropriate surfactant for a given application is often motivated by the need to control bulk-solution micellization properties, such as the CMC and the micelle shape and size. The ability to make molecular-level predictions of these properties would allow formulators in industry to speed up the design and optimization of new surfactant formulations. In this thesis, a combined computer-simulation/molecular-thermodynamic (CS-MT) modeling approach was developed and used to study the micellization behavior of ionic branched surfactants, a class of surfactants of great industrial relevance in applications such as detergency, emulsification...
peer-reviewed; There has been a growing interest in the role of
theory within Software Engineering (SE) research. For several
decades, researchers within the SE research community have
argued that, to become a real engineering science, SE needs
to develop stronger theoretical foundations. A few authors have
proposed guidelines for constructing theories, building on insights
from other disciplines. However, so far, much SE research is not
guided by explicit theory, nor does it produce explicit theory. In
this paper we argue that SE research does, in fact, show traces
of theory, which we call theory fragments. We have adapted an
analytical framework from the social sciences, named the Validity
Network Schema (VNS), which we use to illustrate the role of
theorizing in SE research. We illustrate the use of this framework
by dissecting three well-known research papers, each of which
has had a significant impact on its respective subdiscipline. We
conclude this paper by outlining a number of implications for
future SE research, and show how, by increasing awareness and
training, the development of SE theories can be improved.
peer-reviewed; Contemporary robotics relies on the most recent advances in automation and robotic technologies to bring autonomy and autonomic-computing principles to robotized systems. However, the design and implementation of autonomous systems is an extremely challenging task. The problem stems from the very nature of such systems, where features like environment monitoring and self-monitoring provide the awareness capabilities that drive system behavior. Moreover, changes in the operational environment may trigger self-adaptation. The first, and one of the biggest, challenges in the design and implementation of such systems is how to handle requirements specifically related to the autonomy of a system. Requirements engineering for autonomous systems appears to be a wide-open research area with only a limited number of approaches considered so far. In this paper, we present an approach to Autonomy Requirements Engineering in which goal models are merged with special generic autonomy requirements. The approach helps us identify and record the autonomy requirements of a system in the form of special self-* objectives and other assistive requirements, the latter capturing alternative objectives the system may pursue in the presence of factors threatening the achievement of the initial system goals. The paper presents a case study in which autonomy requirements engineering is applied to the domain of space missions.
peer-reviewed; Global software development is a business model in which software development is distributed beyond national boundaries. However, the distributed nature of the processes makes communication and collaboration very challenging. Requirements engineering is an intensive software development life-cycle activity that involves frequent communication among stakeholders. In global software development, tight project schedules and global distance give rise to incomplete requirements handovers from one site to another. An efficient mechanism therefore becomes indispensable, as information available to one project team can often contradict what is available to another.
Software as a Service (SaaS), on the other hand, is a cloud deployment model that can provide multiple users with a web space to collaborate on matters of mutual interest. In this research, we propose a SaaS-based mechanism to help globally distributed software development teams work through the requirements engineering process. Our emphasis is on the situation that occurs after requirements are handed over to another software development site.
Execution time estimation plays an important role in computer system design. It is particularly critical in real-time system design, where meeting a deadline can be as important as ensuring the logical correctness of a program. Accurately estimating the execution time of a program can be extremely challenging, since the execution time varies with inputs, the underlying computer architecture, and run-time dynamics, among other factors. The problem becomes even more challenging as computing systems move from single-core to multi-core platforms, with more hardware resources shared by multiple processing cores.
The goal of this research is to investigate the relationship between the execution time of a program and the underlying architecture features (e.g., cache size, associativity, memory latency), as well as its run-time characteristics (e.g., cache miss ratios), and, based on these, to estimate its execution time on a multi-core platform using a regression approach. We developed our test platform on GEM5, an open-source, cycle-accurate multi-core simulation tool set. Our experimental results clearly show the strong relationship of program execution time to architecture features and run-time characteristics. Moreover...
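As a hedged illustration of the regression approach described here (the feature names and numbers are invented, not GEM5 output):

```python
# Fit a linear model relating execution time to architecture features and
# run-time characteristics, then estimate time for an unseen configuration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [L1 size (KB), associativity, memory latency (cycles), cache miss ratio]
X = np.array([
    [32,  4, 100, 0.12],
    [64,  8, 100, 0.07],
    [32,  4, 200, 0.12],
    [128, 8, 150, 0.03],
])
y = np.array([41.0, 33.5, 55.2, 29.8])  # measured execution times (ms)

model = LinearRegression().fit(X, y)
print(f"estimated time: {model.predict([[64, 8, 150, 0.05]])[0]:.1f} ms")
```

In practice the feature set, the regression family, and the training runs would come from the cycle-accurate GEM5 simulations described above.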
This thesis focuses on text data compression based on the fundamental coding scheme known as the American Standard Code for Information Interchange, or ASCII. The research objective is the development of software algorithms that achieve significant compression of text data. Past and current compression techniques were thoroughly reviewed to allow a proper comparison between the compression results of the proposed technique and those of existing ones. The research problem is driven by the need to compress text files further in order to save valuable memory space and increase their transmission rate. It was deemed necessary that the compression algorithm be effective even for small files and be able to handle uncommon words, which are dynamically added to the dictionary once they are encountered. A critical design aspect of this compression technique is its compatibility with existing compression techniques; in other words, the developed algorithm can be used in conjunction with existing techniques to yield even higher compression ratios. This thesis demonstrates these capabilities and outcomes, and the research objective of achieving a higher compression ratio is attained.
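The thesis's algorithm is not detailed in this abstract; as a generic illustration of the dynamic-dictionary idea it builds on (new words added to the dictionary as they are encountered), here is a textbook LZW encoder, not the author's method:

```python
def lzw_encode(text: str) -> list[int]:
    """Encode text as dictionary indices, growing the dictionary with
    every previously unseen phrase."""
    dictionary = {chr(i): i for i in range(256)}  # start with ASCII entries
    phrase, codes = "", []
    for ch in text:
        candidate = phrase + ch
        if candidate in dictionary:
            phrase = candidate                        # extend current match
        else:
            codes.append(dictionary[phrase])          # emit longest match
            dictionary[candidate] = len(dictionary)   # learn the new phrase
            phrase = ch
    if phrase:
        codes.append(dictionary[phrase])
    return codes

sample = "the theme then the theme"
print(len(lzw_encode(sample)), "codes for", len(sample), "characters")
```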
This work consists of the design and implementation of a complete monitored security system. Two computers make up the basic system: one is the transmitter and the other is the receiver, and the two are interconnected by modems. Based on the status of the input sensors (magnetic contacts, motion detectors, and others), the transmitter detects an alarm condition and sends a detailed report of the event via modem to the receiver computer.
This paper introduces an important new canonical set of functions for solving Lanchester-type equations of modern warfare for combat between two homogeneous forces with power attrition-rate coefficients with "no effect." Tabulations of these functions, which we call Lanchester-Clifford-Schlafli (LCS) functions, allow one to study this particular variable-coefficient model almost as easily and thoroughly as Lanchester's classic constant-coefficient one; we point out where such tables are available. We show that our choice of LCS functions allows one to obtain important information (in particular, force-annihilation prediction) without having to spend the time and effort of computing force-level trajectories, and we show from theoretical considerations that our choice is the best for this purpose. These theoretical considerations apply in general to Lanchester-type equations of modern warfare and provide guidance for developing other canonical Lanchester functions (i.e., canonical functions for other attrition-rate coefficients). Moreover, our new LCS functions provide valuable information about various related variable-coefficient models. We also introduce an important transformation of the battle's time scale that not only frequently simplifies the force-level equations, but also shows that relative fire effectiveness and intensity of combat are the only two weapon-system parameters determining the course of such variable-coefficient Lanchester-type combat. (Author); Naval Postgraduate School...
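For orientation (generic notation, not the paper's), the variable-coefficient Lanchester equations of modern warfare for force levels x(t) and y(t), together with the standard quantities behind the time-scale transformation mentioned above, are

```latex
\frac{dx}{dt} = -a(t)\,y, \qquad \frac{dy}{dt} = -b(t)\,x,
\qquad
\tau(t) = \int_{0}^{t}\sqrt{a(s)\,b(s)}\,ds \;\;\text{(intensity of combat)},
\qquad
\lambda(t) = \sqrt{a(t)/b(t)} \;\;\text{(relative fire effectiveness)},
```

where power attrition-rate coefficients correspond to a(t) and b(t) proportional to powers of t.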
This thesis studies the feasibility of developing a smart shipboard sensor network. The objective is to show that sensors can be made smart by storing calibration constants and other relevant data, such as network information, on the sensor and on a server computer. The study focuses on the design and implementation of an Ipsil IP(micro)8930 microcontroller, which is connected, via a standard TCP/IP implementation, to a network where the sensor information can be viewed on a web page. The information that makes the sensor "smart" is stored on the Ipsil chip and the server computer and can be accessed by an HTML-based program. By taking pre-computed calibration constants that minimize the measurement errors and writing them, through the web page, into the Ipsil chip's EEPROM, the calibrated sensor reading can be calculated. The expected contributions of the research effort are a reduction in manpower, increased efficiency, and greater awareness of plant and equipment operation aboard naval vessels, specifically the DDX. The hardware is relatively inexpensive, reliable, and available COTS (Commercial Off the Shelf). If implemented, a Smart Shipboard Sensor Network would allow the watch standers, CHENG...
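As a minimal sketch of the calibration step (a linear gain/offset model is assumed for illustration; the constants and values are hypothetical, not from the thesis):

```python
def calibrated_reading(raw: float, gain: float, offset: float) -> float:
    """Convert a raw sensor count to engineering units using calibration
    constants retrieved from the sensor's EEPROM."""
    return gain * raw + offset

# Hypothetical constants written via the web page into the chip's EEPROM.
print(calibrated_reading(raw=512, gain=0.125, offset=-12.0))  # -> 52.0
```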
This thesis describes the design of an interactive Web-based course, EC4810 Fault Tolerant Computing, taught in the Department of Electrical and Computer Engineering (ECE) at the Naval Postgraduate School. It is part of the ECE Department's Distributed Learning program, in which students use multimedia-enhanced online courses through the Web. A major accomplishment of this thesis is the development of a template for other courses. A step-by-step guide has been developed that outlines the process of online course maintenance and procedures for producing other courses.; Republic of Singapore Air Force author
The primary purpose of this thesis was to design and develop a prototype e-commerce system in which dynamic parameters are included in the decision-making process and in the execution of an online transaction. The system developed and implemented takes into account previous usage history, priority, and associated engineering capabilities. The system was built on a three-tiered client-server architecture, with an Internet browser as the interface. The middle-tier web server was implemented using Active Server Pages, which link the client system to the other servers. A relational database management system formed the data component of the three-tiered architecture; it includes a data-warehousing capability that extracts needed information from the stored data on customers and their orders. The system organizes and analyzes the data generated during a transaction to formulate a model of the client's behavior during and after the transaction, which is then used for decisions such as pricing and order rescheduling in the client's forthcoming transactions. Among other things, the system brings predictability to the transaction execution process, which can be highly desirable in the current competitive scenario.
Initial fixation with bone and long-term bone loss are the two main problems associated with total hip replacement (THR); both are studied by electron microscopy and computer simulation in this thesis. Bare Titanium-6 wt% Aluminum-4 wt% Vanadium (Ti64) implants, Ti64 implants with plasma-sprayed hydroxyapatite (PSHA)...; (cont.) implementation of this scintigraphic method for quantitative studies of osteoblast-mediated mineralization in vitro. A 2-D truss finite element model is used to study the remodeling of trabecular bone. Using strain energy density (SED) as the optimization objective and the trabecular width as the optimization variable, an optimal structure with minimum SED was achieved. This structure is similar to real bone in its dense exterior, porous interior, and trabecular orientation. The bone density distribution pattern also matched results previously reported by others. Different implants were introduced to simulate replacement of the femoral head. It was shown that the difference in Young's modulus between the bone and implant materials is the main cause of long-term bone loss (stress shielding). This problem can be alleviated by proper implant design and by resurfacing instead of replacing the whole femoral head.
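As a hedged sketch of the remodeling optimization described above (standard truss notation, not the thesis's): for a truss element e with axial force N_e, length L_e, Young's modulus E, and cross-section A_e set by the trabecular width, the stored strain energy and the optimization read

```latex
U_e = \frac{N_e^{2} L_e}{2\,E A_e},
\qquad
\min_{\{A_e\}} \sum_{e} U_e
\quad \text{subject to} \quad \sum_{e} A_e L_e = V_0,
```

i.e., total strain energy is minimized at fixed bone volume V_0 by redistributing material among the trabeculae.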
Engineering activities involve large groups of people from different domains
and disciplines. They often generate important information flows that are
difficult to manage. To address these difficulties, a knowledge engineering
process is necessary to structure the information and its use. This paper
presents a deployment of a knowledge capitalization process based on the
enrichment of the MOKA methodology to support the integration of Process
Planning knowledge in a CAD system. Our goal is to help the different actors
work collaboratively by proposing a single reference view of the domain, the
context, and the objectives, on the assumption that this will support better
decision-making.
Studying materials informatics from a data mining perspective can be
beneficial for manufacturing and other industrial engineering applications.
Predictive data mining techniques and machine learning algorithms are combined
to design a knowledge discovery system for the selection of engineering
materials that meet given design specifications. A predictive method, the
naive Bayesian classifier, and the Pearson correlation coefficient method were
implemented for materials classification and selection, respectively. The
knowledge extracted from the engineering materials data sets is proposed for
effective decision-making in advanced engineering materials design
applications.; Comment: 12 pages, 8 figures; International Journal of Database Management
Systems (IJDMS), Vol.3, No.1, February 2011. arXiv admin note: text overlap
with arXiv:1206.3078 by other authors
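As a hedged sketch of the classification side of such a system (features, labels, and values are invented for illustration; the paper's data sets are not reproduced here):

```python
# Train a Gaussian naive Bayesian classifier on material-property features
# and classify a candidate material against the learned classes.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Each row: [tensile strength (MPa), hardness (HB), density (g/cm^3)]
X = np.array([
    [400, 120, 7.85],   # structural steel
    [310,  95, 2.70],   # aluminium alloy
    [900, 330, 4.43],   # titanium alloy
    [250,  80, 2.66],   # aluminium alloy
])
y = ["steel", "aluminium", "titanium", "aluminium"]

clf = GaussianNB().fit(X, y)
print(clf.predict([[350, 100, 2.8]]))  # matches the aluminium feature ranges
```

Selection could then rank candidates within the predicted class, for example by correlating their properties with the design specification, in the spirit of the Pearson-correlation step the paper describes.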
We are developing the Virtual Experiences (Vx)Lab, a research and research
training infrastructure and capability platform for global collaboration. VxLab
comprises labs with visualisation capabilities, including underpinning
networking to global points of presence, videoconferencing and high-performance
computation, simulation and rendering, and sensors and actuators such as
robotic instruments locally and in connected remote labs. VxLab has been used
for industry projects in industrial automation, experimental research in cloud
deployment, workshops and remote capability demonstrations, teaching
advanced-level courses in games development, and student software engineering
projects. Our goal is for these resources to become a "catalyst" for IT-driven
research results both within the university and with external industry
partners. Use cases include: multi-disciplinary collaboration, prototyping and
troubleshooting requiring multiple viewpoints and architectures, dashboards and
decision support for global remote planning and operations.; Comment: This is a pre-print of an extended talk abstract accepted for
eResearch Australasia, Brisbane, October 2015
We investigate ten years of user support emails in the large-scale solver
library PETSc in order to identify changes in user requests. For this purpose
we assign each email thread to one or several categories describing the type of
support request. We find that despite several changes in hardware architecture
as well as programming models, the relative share of emails for the individual
categories does not show a notable change over time. This is particularly
remarkable as the total communication volume has increased four-fold in the
considered time frame, indicating a considerable growth of the user base. Our
data also demonstrates that user support cannot be substituted with what is
often referred to as 'better documentation' and that the involvement of core
developers in user support is essential.; Comment: 2 pages, 1 figure, whitepaper for the workshop "Computational Science
& Engineering Software Sustainability and Productivity Challenges"