Semi-supervised learning is an important topic in machine learning, concerned with pattern classification where only a small subset of the data is labeled. In this paper, a new network-based (or graph-based) semi-supervised classification model is proposed. It employs a combined random-greedy walk of particles, with competition and cooperation mechanisms, to propagate class labels to the whole network. Due to the competition mechanism, the proposed model spreads labels in a local fashion, i.e., each particle visits only a portion of the nodes potentially belonging to it and is not allowed to visit nodes definitely occupied by particles of other classes. In this way, a "divide-and-conquer" effect is naturally embedded in the model. As a result, the proposed model achieves a good classification rate while exhibiting a low computational complexity order in comparison to other network-based semi-supervised algorithms. Computer simulations carried out on synthetic and real-world data sets provide a numeric quantification of the method's performance.
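To make the mechanism concrete, here is a minimal sketch (not the authors' exact model) of label propagation by competing particles on a graph: each labeled node releases a particle of its class, and at every step a particle moves either greedily toward nodes its class already dominates or at random, raising its class's domination level on the unlabeled nodes it visits. The parameters (p_greedy, n_steps, delta) and the adjacency-matrix representation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def particle_competition(A, labels, n_classes, p_greedy=0.6,
                         n_steps=20000, delta=0.1, seed=0):
    """A: (n, n) 0/1 adjacency matrix; labels: -1 for unlabeled nodes."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    dom = np.full((n, n_classes), 1.0 / n_classes)  # domination levels
    homes = [i for i in range(n) if labels[i] >= 0] # one particle per labeled node
    pos = list(homes)                               # current node of each particle
    for i in homes:                                 # labeled nodes stay fixed
        dom[i] = 0.0
        dom[i, labels[i]] = 1.0
    for _ in range(n_steps):
        k = rng.integers(len(homes))
        c = labels[homes[k]]                        # the particle's class
        nbrs = np.flatnonzero(A[pos[k]])
        if rng.random() < p_greedy:
            # Greedy move: prefer neighbours this class already dominates.
            w = dom[nbrs, c] + 1e-12
            nxt = rng.choice(nbrs, p=w / w.sum())
        else:
            nxt = rng.choice(nbrs)                  # random move
        if labels[nxt] < 0:                         # competition on unlabeled nodes
            dom[nxt, c] += delta
            dom[nxt] /= dom[nxt].sum()              # rivals lose relative share
        pos[k] = nxt
    return dom.argmax(axis=1)                       # predicted class per node
```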
Holograms can be produced using traditional holography techniques, or they can be generated by computer, in which case they are known as computer-generated holograms (CGHs). Most of these holograms operate with monochromatic light. Holograms can, however, also operate with white light. Such white-light elements are used in several applications, such as security, to verify the authenticity of credit cards and other documents, because their fabrication processes are difficult and expensive to reproduce. However, conventional white-light holograms operate by reflection and exhibit some undesirable effects, such as chromatic distortions like the rainbow effect. In this work, a computer-generated white-light diffractive optical element is proposed. The element is computed based on a halftoning technique and on the partial spatial coherence of an extended white-light source. The phase elements are produced using well-established integrated-circuit fabrication techniques, and optical simulations are presented. No iterative methods are required. The optical and simulated reconstructions of this white-light element are very similar and produce sharp images...
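Since the abstract names halftoning as the core computation, here is a generic sketch of textbook Floyd-Steinberg error-diffusion halftoning, which turns a greyscale pattern into a binary one; the element's actual halftoning algorithm and the partial-coherence modelling are not given in the abstract, so this is only an illustrative stand-in.

```python
import numpy as np

def floyd_steinberg(img):
    """img: 2-D float array in [0, 1]; returns a binary (0/1) array."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Diffuse the quantization error onto unprocessed neighbours.
            if x + 1 < w:               out[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     out[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               out[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: out[y + 1, x + 1] += err * 1 / 16
    return out.astype(int)
```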
Web 2.0 has changed the development of Internet applications. Nevertheless, researchers and developers still replicate each other's ideas with little reuse. This scenario illustrates the need for domain engineering, in which the commonalities and variabilities of a family of applications are identified and documented in order to achieve reuse of the developed components. In this work, a domain engineering effort is carried out for Web 2.0 social networks, focusing on the collaborative features related to content sharing. As a method, FODA (Feature Oriented Domain Analysis) is used, adapted with the 3C collaboration model to classify the collaborative features and with patterns for computer-mediated interaction to describe them. In the 3C model, collaboration is analyzed in terms of communication, coordination, and cooperation, and the patterns describe and detail the usage context of the identified features. To implement the collaborative features common to these applications, software components compatible with the Groupware Workbench platform are developed. An experiment was carried out to evaluate the artifacts produced by the domain engineering, and a case study to evaluate the applicability and coverage of the developed components in a real context...
The recent trend of using fine water mist systems to replace the legacy HALON-1301 fire suppression systems warrants further study into other applications of water mist systems. Preliminary research and investigation indicate that fine mists (20-25 μm droplet size) may reduce the peak overpressures of a shock wave traveling through a space. Such pressure reductions could be used to mitigate the destructive effects of a shock wave (initiated by an explosive device) traveling through a structure. Currently these blast mitigation effects have only been demonstrated in small-scale shock tube tests and computer simulations, and uncertainty exists as to the scalability of such a system. The intention of this research is to investigate the applicability of such a blast mitigation system for shipboard use. A study of the degree of mitigation necessary to make a system practical for shipboard installation was conducted. In addition, a theoretical study of the mechanisms of blast mitigation using water mists was completed, and a preliminary design of a full-scale system was examined. Given the recent trend toward tumblehome hull forms in future naval combatant designs, this system is strongly applicable to the "dead" spaces created by the shaping of the tumblehome hull. Further work is needed in numerical modeling and laboratory testing of specific phases of the mitigation. The end goal is a feasible design of a blast mitigation system to be used in the outermost spaces of naval combatants to protect interior vital system spaces.
This thesis aims to improve automation in Model-Driven Engineering (MDE). MDE is a paradigm that promises to reduce software complexity through the intensive use of models and of automatic model transformations (MT). Put simply, in the MDE vision, specialists use several models to represent a piece of software, and they produce the source code by automatically transforming these models. Automation is therefore a key factor and a founding principle of MDE. Beyond model transformations, other activities require automation as well, e.g., the definition of modeling languages and software migration.
In this context, the main contribution of this thesis is a general approach for improving the automation of MDE. Our approach is based on metaheuristic search guided by examples.
We apply this approach to two important MDE problems: (1) model transformation and (2) the precise definition of modeling languages. For the first problem, we distinguish between transformation in the context of migration and general model-to-model transformations. In the case of migration...
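As a rough illustration of "metaheuristic search guided by examples", here is a minimal hill-climbing sketch in which a candidate transformation is scored by how many input/output example pairs it reproduces; the thesis's actual search operates on model transformations and is far richer, and every name below is illustrative.

```python
import random

def search(examples, initial, mutate, apply_rule, n_iter=1000, seed=0):
    rng = random.Random(seed)

    def fitness(rule):
        # Fraction of examples the candidate transformation reproduces.
        return sum(apply_rule(rule, x) == y for x, y in examples) / len(examples)

    best, best_f = initial, fitness(initial)
    for _ in range(n_iter):
        cand = mutate(best, rng)
        f = fitness(cand)
        if f >= best_f:              # greedy acceptance (hill climbing)
            best, best_f = cand, f
        if best_f == 1.0:
            break
    return best, best_f

# Toy usage: learn the coefficient of a linear "transformation" y = a * x.
examples = [(1, 3), (2, 6), (5, 15)]
rule, score = search(examples, initial=1,
                     mutate=lambda a, rng: a + rng.choice([-1, 1]),
                     apply_rule=lambda a, x: a * x)
print(rule, score)   # -> 3 1.0
```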
Procedures for the ergonomic use of desktop computer technology are well documented. The design of computer workstations, the positioning of the body, and ergonomic work practices have received a great deal of attention, and the relevant ergonomic principles are extensively covered in books, manuals, information guides, and web sites. Despite the proliferation of material, however, there is a wide gap between theory and practice. This thesis investigates the reasons for this gap by comparing the knowledge of practice, derived from four field studies conducted at different times in different kinds of organisation, with the extensive literature on ergonomics that was available at the time.
The studies showed that levels of ergonomic knowledge and the priority given to ergonomic computer use were low, irrespective of location, but generally better in public-sector organisations. However, academic staff and post-graduate students reported the least awareness of ergonomic principles, were least likely to have received training in ergonomics provided by their organisation, and experienced the highest proportion of physical health symptoms. Most workers did not know whether their organisation had written policies and procedures regarding the ergonomic use of computers. The majority believed that ergonomic computer use was not given sufficient priority within their organisation and that they needed to spend more time in training on ergonomics. Most were satisfied with their job; the work was interesting and the tasks were varied. The work environments were generally supportive and the people had adequate job control...
Trabecular bone, the porous bone found predominantly in the spine and at the ends of long bones, is a mechanically regulated tissue. The hierarchy of bone consists of several levels of structure, from raw collagen and calcium phosphate on the microscale to trabecular packets, which are constantly being remodeled by bone cells, on the tissue level. The remodeling of bone is believed to be explained through the concept of functional adaptation, whereby bone is a maximum-strength yet minimum-weight material. Under functional adaptation, phenomenological models are able to predict, to a certain degree, the density distributions and bone shapes witnessed in vivo. Functional adaptation assumes there is an equilibrium state in which no changes in bone mass or structure occur at the bony surface; topological and mass changes are incurred at the local level when equilibrium is not achieved. The combination of these local changes produces a self-organized structure, meaning that the global bone shape is explained by simple local rules. Unfortunately, neither tissue engineering nor medical device design has incorporated the knowledge base of functional adaptation of bone into orthopedic designs.
The objective of this dissertation was to examine how the concept of functional adaptation could be applied to tissue engineering of bone, insofar as it leads to the development of a computer-aided tissue engineering (CATE) framework. The idea was to increase the specificity with which implant/scaffold architectural shape can be matched to the tissue mechanical properties of the spine (or other locations)...
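A minimal sketch of the local remodeling rule the text describes, assuming the common phenomenological form d(rho)/dt = B(S - S*): density grows where the local mechanical stimulus exceeds the equilibrium set-point and resorbs where it falls short. The stimulus model, constants, and bounds are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def remodel(density, stimulus, setpoint=1.0, rate=0.05, n_steps=100):
    """density, stimulus: 1-D arrays over bone surface sites."""
    rho = density.copy()
    for _ in range(n_steps):
        # Local rule: d(rho)/dt = B * (S - S*), clamped to physical bounds.
        rho += rate * (stimulus - setpoint)
        np.clip(rho, 0.01, 1.8, out=rho)  # illustrative density bounds (g/cm^3)
        # A fuller model would recompute the stimulus from the new density
        # field (e.g., via finite-element analysis); here it is held fixed.
    return rho
```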
Working with the Gillette Corporation, an automated mechanical testing tool that bent a small flat piece of steel was designed. The design of the tool was an effort to improve upon previous generations of the same tool. It consisted of three main elements: a servomotor connected to a torque transducer, which in turn was connected to a break device. A thin piece of steel was loaded into the break device and the motor was activated, moving a flipper arm on the device that bent the steel. While bending the steel, the torque transducer relayed torque and angle information to a computer. This information was collected and displayed in Excel as torque-versus-angle plots, which show the moment at which the piece of steel broke. The entire process was automated so that, after loading the steel, one click of a button would run one test. Razor blades were primarily bent with the device until they broke, and for this reason the measuring tool was called the 'blade break test.' The work consisted of designing a robust mechanical system coupling the three devices mentioned above in series. Code was written in Visual Basic to manage all the individual devices in the measuring tool, getting them to work together and linking them to a computer. A user interface was designed with engineers in mind...
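A minimal sketch (in Python rather than the tool's original Visual Basic) of the data path described above: step the motor, sample torque and angle from the transducer, and report the break as the angle at which torque collapses below its running peak. The motor and transducer objects and their methods are hypothetical placeholders, not Gillette's actual device API.

```python
def run_break_test(motor, transducer, angle_step=0.5, max_angle=90.0,
                   drop_ratio=0.5):
    """Returns (torque-vs-angle samples, break angle or None)."""
    samples, peak, angle = [], 0.0, 0.0
    while angle < max_angle:
        motor.move_to(angle)               # hypothetical motor API
        torque = transducer.read_torque()  # hypothetical transducer API
        samples.append((angle, torque))
        peak = max(peak, torque)
        # Break detected when torque falls well below the peak seen so far.
        if peak > 0 and torque < drop_ratio * peak:
            return samples, angle
        angle += angle_step
    return samples, None                   # blade did not break
```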
Over the past ten years, the development of more advanced computer systems and the growth in the use of the Internet have led to numerous changes in airline ticket distribution strategies. For example, the use of websites for booking and ticketing air travel continues to increase, and the Internet is often cited as the preferred model for a low-cost distribution channel. At the same time, Network Revenue Management methods are now viewed as a key tool for airlines to maximize revenue in an increasingly competitive marketplace. These new systems and tools have helped the airlines achieve record profits in the strong economy of the late 1990s, but these profits may have masked hidden costs of using the new technology. Examples of hidden costs include the added computational burden of increased search engine requests to the computer reservations system as well as the increased opportunity for automated systems to bypass the booking limits set by the revenue management system. Such costs have yet to be examined and quantified in an academic research effort. The purpose of this thesis research is to understand a variety of issues related to how the technologies of more advanced distribution channels and more sophisticated revenue management systems interact with each other and impact air travel providers...
Kernel-based machine learning algorithms map data from the original input feature space to a kernel feature space of higher dimensionality and then solve a linear problem in that space. Over the last decade, kernel-based classification and regression approaches such as support vector machines (SVMs) have been widely used in remote sensing as well as in various civil engineering applications. In spite of their good performance with different datasets, support vector machines still suffer from shortcomings such as the difficulty of visualizing/interpreting the model, the choice of kernel and kernel-specific parameters, and the setting of the regularization parameter. Relevance vector machines (RVMs) are another kernel-based approach that has been explored for classification and regression in the last few years. The advantages of relevance vector machines over support vector machines are the availability of probabilistic predictions, the use of arbitrary kernel functions, and the fact that no regularization parameter needs to be set. This paper presents a state-of-the-art review of SVMs and RVMs in remote sensing and provides some details of their use in other civil engineering applications.
We present results from the first geological field tests of the `Cyborg Astrobiologist', a wearable computer and video camcorder system that we are using to test and train a computer-vision system towards having some of the autonomous decision-making capabilities of a field geologist. The Cyborg Astrobiologist platform has thus far been used for testing and development of the following algorithms and systems: robotic acquisition of quasi-mosaics of images, real-time image segmentation, and real-time determination of interesting points in the image mosaics. This work is a test of the whole system rather than of any one part of it. However, beyond the concept of the system itself, the uncommon map (despite its simplicity) is the main innovative part of the system: it helps to determine interest points in a context-free manner. Overall, the hardware and software systems function reliably, and the computer-vision algorithms are adequate for these first field tests. In addition to their proof-of-concept aspect, the main result of these field tests is the enumeration of issues that we can improve in the future, including dealing with structural shadow and microtexture, and controlling the camera's zoom lens in an intelligent manner.
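A minimal sketch of what an "uncommon map" could look like: segment the image into a few classes, score each pixel by the rarity of its class, and take the interest point at the rarity peak. The crude grey-level segmentation below is an illustrative stand-in for the system's real-time segmentation.

```python
import numpy as np

def uncommon_map(grey, n_classes=8):
    """grey: 2-D uint8 image; returns (rarity map, (row, col) interest point)."""
    classes = (grey.astype(int) * n_classes) // 256   # crude segmentation
    counts = np.bincount(classes.ravel(), minlength=n_classes)
    rarity = 1.0 / np.maximum(counts, 1)              # rare class = high score
    umap = rarity[classes]
    peak = np.unravel_index(umap.argmax(), umap.shape)
    return umap, peak
```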
The domain of analysis and design of Decisional Information Systems (DIS) is increasingly applying new techniques and methods to make the decision process succeed and to minimize design time. Our objective in this paper is to define a group of patterns that ensure the systematic reuse of our approach to analysing a DIS's business requirements. Through this work, we seek to guide the discovery of an organization's business requirements, expressed as goals, by introducing the notion of context; to promote good process design for a DIS; to capitalize on the process and models proposed in our approach; and to systematize the reuse steps of this approach so that similar projects can be analysed or adapted as needed. The patterns are at the same time process patterns and product patterns, as they capitalize on both models and their associated processes. These patterns are represented according to the PSIGMA formalism.
Organizing data into semantically meaningful groups is one of the fundamental modes of understanding and learning. Cluster analysis is the formal study of methods and algorithms for such grouping. The K-means clustering algorithm is one of the most fundamental and simple clustering algorithms: when there is no prior knowledge about the distribution of a data set, K-means is the first choice for clustering with an initial number of clusters. In this paper a novel distance metric, called the Design Specification (DS) distance measure function, is integrated with the K-means clustering algorithm to improve cluster accuracy. K-means with the proposed distance measure maximizes cluster accuracy at 99.98% for P = 1.525, a value determined through an iterative procedure. The performance of the DS distance measure function with K-means is compared with the performance of other standard distance functions, such as the Euclidean, squared Euclidean, City Block, and Chebyshev similarity measures, deployed with K-means. The proposed method is evaluated on an engineering materials database. The experiments on cluster analysis and outlier profiling show an excellent improvement in the performance of the proposed method.
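A plain K-means loop with a pluggable distance function shows where a measure such as the DS distance would be integrated; since the abstract does not give the DS formula, the Euclidean function below is only a stand-in for it.

```python
import numpy as np

def kmeans(X, k, distance, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: each point joins its nearest center.
        d = np.array([[distance(x, c) for c in centers] for x in X])
        labels = d.argmin(axis=1)
        # Update step: each center becomes the mean of its members.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

def euclidean(x, c):          # stand-in for the DS distance measure
    return np.sqrt(((x - c) ** 2).sum())
```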
One may define a complex system as a system in which phenomena emerge as a
consequence of multiscale interaction among the system's components and their
environments. The field of Complex Systems is the study of such
systems--usually naturally occurring, either biological or social. Systems
Engineering may be understood to include the conceptualising and building of
systems that consist of a large number of concurrently operating and
interacting components--usually including both human and non-human elements. It
has become increasingly apparent that the kinds of systems that systems
engineers build have many of the same multiscale characteristics as those of
naturally occurring complex systems. In other words, systems engineering is the
engineering of complex systems. This paper and the associated panel will
explore some of the connections between the fields of complex systems and
systems engineering.
The use of the World Wide Web for distance education has received increasing attention over the past decades. The real challenge of adapting this technology for engineering education and training is to facilitate laboratory experiments via the Internet. In the sciences, measurement plays an important role: the accuracy of a measurement, as well as its units, helps scientists to better understand phenomena occurring in nature. This paper introduces Metrology educators to the use and adoption of Java applets to create virtual, online Metrology laboratories for students. These techniques have been used to successfully build a laboratory course that augments the more conventional lectures of a concepts-of-Metrology course at the Faculty of Engineering, Albaha University, KSA. Improvements to the package are still under way to incorporate Web-based technologies (Internet home pages, HTML, Java programming, etc.). This Web-based education and training has been successfully class-tested within an undergraduate preliminary-year engineering course, and students reported a positive experience with its use. The use of these labs should be self-explanatory, and their reliable operation has been demonstrated.
Computer vision plays a major role in the robotics industry, where vision
data is frequently used for navigation and high-level decision making. Although
there is significant research in algorithms and functional requirements, there
is a comparative lack of emphasis on how best to map these abstract concepts
onto an appropriate software architecture.
In this study, we distinguish between the functional and non-functional
requirements of a computer vision system. Using a RoboCup humanoid robot system
as a case study, we propose and develop a software architecture that fulfills
the latter criteria.
The modifiability of the proposed architecture is demonstrated by detailing a
number of feature detection algorithms and emphasizing which aspects of the
underlying framework were modified to support their integration. To demonstrate
portability, we port our vision system (designed for an application-specific
DARwIn-OP humanoid robot) to a general-purpose, Raspberry Pi computer. We
evaluate performance on both platforms and compare them to a vision system
optimised for functional requirements only.
The architecture and implementation presented in this study provide a highly
generalisable framework for computer vision system design that is of particular
benefit in research and development...
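A minimal sketch of the modifiability idea: feature detectors share one interface, so adding an algorithm means registering one more class rather than touching the pipeline. The class and method names are illustrative, not those of the RoboCup system described above.

```python
from abc import ABC, abstractmethod

class FeatureDetector(ABC):
    @abstractmethod
    def detect(self, frame):
        """Return a list of detected features for one camera frame."""

class BallDetector(FeatureDetector):
    def detect(self, frame):
        return []   # colour/shape matching would go here

class FieldLineDetector(FeatureDetector):
    def detect(self, frame):
        return []   # edge/line extraction would go here

class VisionPipeline:
    def __init__(self, detectors):
        self.detectors = list(detectors)

    def process(self, frame):
        # Platform-independent loop: the same pipeline runs on the robot or
        # on a Raspberry Pi; only the frame source and detector set change.
        return {type(d).__name__: d.detect(frame) for d in self.detectors}

pipeline = VisionPipeline([BallDetector(), FieldLineDetector()])
results = pipeline.process(frame=None)   # frame would be a camera image
```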
Biology-derived algorithms are an important part of computational sciences,
which are essential to many scientific disciplines and engineering
applications. Many computational methods are derived from or based on the
analogy to natural evolution and biological activities, and these biologically
inspired computations include genetic algorithms, neural networks, cellular
automata, and other algorithms.
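As one concrete instance of these biology-derived methods, here is a minimal genetic algorithm: a population of bit-string "genomes" evolves by selection, crossover, and mutation toward higher fitness (the toy one-max objective here simply counts 1-bits). All parameters are illustrative.

```python
import random

def genetic_algorithm(n_bits=20, pop_size=30, n_gen=100, p_mut=0.02, seed=0):
    rng = random.Random(seed)
    fitness = lambda g: sum(g)                      # toy objective: one-max
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(n_gen):
        def parent():                               # 2-way tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = parent(), parent()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - b if rng.random() < p_mut else b for b in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

print(genetic_algorithm())                          # tends toward all 1s
```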
There is a hidden intrigue in the title. CT (category theory) is one of the most abstract mathematical disciplines, sometimes nicknamed "abstract nonsense". MDE (Model-Driven Engineering) is a
recent trend in software development, industrially supported by standards,
tools, and the status of a new "silver bullet". Surprisingly, categorical
patterns turn out to be directly applicable to mathematical modeling of
structures appearing in everyday MDE practice. Model merging, transformation,
synchronization, and other important model management scenarios can be seen as
executions of categorical specifications.
Moreover, the paper aims to elucidate a claim that relationships between CT
and MDE are more complex and richer than is normally assumed for "applied
mathematics". CT provides a toolbox of design patterns and structural
principles of real practical value for MDE. We will present examples of how an
elementary categorical arrangement of a model management scenario reveals
deficiencies in the architecture of modern tools automating the scenario.
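A minimal sketch of the categorical reading: models are objects, transformations are morphisms, and chaining transformations is morphism composition (associative, with identities). The metamodel names and toy transformations are illustrative, not from the paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Transformation:
    source: str                      # source metamodel (domain)
    target: str                      # target metamodel (codomain)
    run: Callable[[dict], dict]

    def then(self, other: "Transformation") -> "Transformation":
        # Composition is defined only when codomain matches domain.
        assert self.target == other.source
        return Transformation(self.source, other.target,
                              lambda m: other.run(self.run(m)))

def identity(metamodel: str) -> Transformation:
    return Transformation(metamodel, metamodel, lambda m: m)

# Usage: UML -> relational -> SQL, composed into one transformation.
uml2rel = Transformation("UML", "REL",
                         lambda m: {"tables": list(m["classes"])})
rel2sql = Transformation("REL", "SQL",
                         lambda m: {"ddl": [f"CREATE TABLE {t};"
                                            for t in m["tables"]]})
pipeline = uml2rel.then(rel2sql)
print(pipeline.run({"classes": ["Person", "Order"]}))
```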
In the response-model form there are two main requirements for frequency response function (FRF) curves: the first is regenerating a "theoretical" curve for the FRF that was actually measured and analysed, and the second is synthesising the other functions that were not measured. The FRF isolates the inherent dynamic properties of a mechanical structure, and experimental modal parameters (frequency, damping, and mode shape) are obtained from a set of FRF measurements. The FRF describes the input-output relationship between two points on a structure as a function of frequency. Therefore, an FRF is actually defined between a single input DOF (point and direction) and a single output DOF, although the FRF was previously defined as a ratio of the Fourier transforms of an output and input signal. In this paper we detect the FRF curve using a Nyquist plot under gyroscopic effect in a revolving structure, using the Smart Office computer software. Keywords: FRF curve; modal test; Nyquist plot; software engineering; gyroscopic effect; Smart Office.
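The FRF definition above (the ratio of the Fourier transforms of output and input) translates directly into code; the Nyquist plot is that FRF drawn in the complex plane (real vs. imaginary part). The synthetic single-DOF signals and constants below are illustrative; a real modal test would use measured data.

```python
import numpy as np

fs, n = 1000.0, 4096                          # sample rate (Hz), record length
t = np.arange(n) / fs
x = np.random.default_rng(0).normal(size=n)   # broadband input force

# Synthetic single-DOF response: convolve the input with a decaying sinusoid.
f0, zeta = 50.0, 0.02                         # natural frequency, damping
wd = 2 * np.pi * f0 * np.sqrt(1 - zeta ** 2)
h = np.exp(-zeta * 2 * np.pi * f0 * t) * np.sin(wd * t) / wd
y = np.convolve(x, h)[:n] / fs                # output response

X, Y = np.fft.rfft(x), np.fft.rfft(y)
H = Y / X                                     # FRF as a transform ratio
freqs = np.fft.rfftfreq(n, 1 / fs)

# Nyquist plot data: near resonance the locus traces out a circle.
nyquist = np.column_stack([H.real, H.imag])
print(freqs[np.abs(H).argmax()])              # peaks near the 50 Hz resonance
```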
Modern computing systems increasingly integrate both Phase Change Memory (PCM) and Flash memory technologies, yet the lifetime of these technologies is limited by the number of times their cells can be written. Because of this limited lifetime, PCM and Flash may wear out before other parts of the system. The objective of this dissertation is to increase the lifetime of memory locations composed of either PCM or Flash cells using coset coding.
For PCM, we extend memory lifetime by using coset coding to reduce the number of bit flips per write compared to un-coded writes. The Flash program/erase cycle degrades page lifetime; we extend the lifetime of Flash memory cells by using coset coding to re-program a page multiple times without erasing. We then show how coset coding can be integrated into Flash solid-state drives.
We ran simulations to evaluate the effectiveness of using coset coding to extend PCM and Flash lifetime. We simulated writes to PCM and found that in our simulations coset coding can be used to increase PCM lifetime by up to 3x over writing un-coded data directly to the memory location. We extended the lifetime of Flash using coset coding to re-write pages without an intervening erase and were able to re-write a single Flash page using coset coding more times than when writing un-coded data or using prior coding work for the same area overhead. We also found in our simulations that using coset coding in a Flash SSD results in higher lifetime for a given area overhead compared to un-coded writes.
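A toy sketch of the coset-coding idea (not the dissertation's actual codes): the stored word carries the data as its syndrome, so the encoder is free to pick, within the data's coset, the member closest to the current cell contents, minimizing bit flips. With the [7,4] Hamming code as the null space (covering radius 1), at most one cell flips per 3-bit write.

```python
import numpy as np
from itertools import product

# Parity-check matrix H (3 x 7): the stored 7-bit word w carries the 3 data
# bits as its syndrome, data = H . w (mod 2), so every member of the same
# coset encodes the same data.
H = np.array([[1, 0, 0, 1, 1, 0, 1],
              [0, 1, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1]])

# Basis of the null space of H over GF(2) (the [7,4] Hamming code): XORing
# any combination of these onto a stored word leaves its syndrome unchanged.
G0 = np.array([[1, 1, 0, 1, 0, 0, 0],
               [1, 0, 1, 0, 1, 0, 0],
               [0, 1, 1, 0, 0, 1, 0],
               [1, 1, 1, 0, 0, 0, 1]])

def encode(data, current):
    """Pick the coset member needing the fewest flips from `current`."""
    w0 = np.concatenate([data, np.zeros(4, dtype=int)])  # one coset member
    best, best_flips = None, 8
    for coeffs in product([0, 1], repeat=4):             # all 16 members
        w = (w0 + np.array(coeffs) @ G0) % 2
        flips = int(np.sum(w != current))
        if flips < best_flips:
            best, best_flips = w, flips
    return best, best_flips

def decode(w):
    return H @ w % 2                                     # data = syndrome

current = np.array([1, 0, 1, 1, 0, 0, 1])                # cell contents now
data = np.array([1, 1, 0])
w, flips = encode(data, current)
assert np.array_equal(decode(w), data)
print(w, flips)                                          # here flips <= 1
```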