# The best tool for your research, coursework, and final thesis!

Page 1 of results: 6,749 digital items found in 0.013 seconds

## Técnicas de computação paralela aplicadas ao método das características em sistemas hidráulicos = Parallel computing applied to method of characteristics in hydraulic systems

Source: Biblioteca Digital da Unicamp
Publisher: Biblioteca Digital da Unicamp

Type: Master's Thesis
Format: application/pdf

Published 22/02/2013
Portuguese

Search Relevance

580.40145%

#Programação paralela (Computação)#Computadores paralelos#Simulação (Computadores) - Dinâmica dos fluídos#Simulação (Computadores) - Modelos matemáticos#Parallel programming (Computing)#Parallel computers#Simulation (computers)

A hydraulic installation is a set of hydromechanical devices and pipes whose function is to transport a fluid. The flow of this fluid is controlled through manoeuvres of the hydromechanical devices. Investigating the impact of these manoeuvres on a hydraulic installation can prevent physical damage to the system (such as pipe rupture). One way to investigate the effect of these manoeuvres is through simulation. Simulation makes it possible to study a hydraulic system that, after a hydraulic manoeuvre, leaves a steady condition (initial steady state), enters a transient state (transient regime), and later settles into a new steady condition (final steady state). During the hydraulic transient regime, overpressure and underpressure waves form inside the piping and can lead to damage. One of the most widely accepted methods for simulating hydraulic transients is the method of characteristics, which transforms the partial differential equations describing the phenomenon into a set of ordinary differential equations. Depending on the size of the hydraulic system (number and length of pipes, number of electromechanical devices...
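The interior-node update of the method of characteristics can be sketched in a few lines. This is a minimal single-pipe illustration, not the thesis's implementation: `B` (characteristic impedance, a/(gA)) and `R` (friction coefficient) are illustrative parameters, and the device boundary conditions that model the manoeuvres are omitted.

```python
def moc_step(H, Q, B, R):
    """One method-of-characteristics time step on the interior nodes.

    H, Q : lists of piezometric head [m] and flow [m^3/s] per node.
    Each interior node combines the C+ characteristic arriving from its
    left neighbour with the C- characteristic from its right neighbour.
    """
    n = len(H)
    H_new, Q_new = H[:], Q[:]
    for i in range(1, n - 1):
        # C+ : H_P = CP - B*Q_P, with CP carried from node i-1
        CP = H[i - 1] + B * Q[i - 1] - R * Q[i - 1] * abs(Q[i - 1])
        # C- : H_P = CM + B*Q_P, with CM carried from node i+1
        CM = H[i + 1] - B * Q[i + 1] + R * Q[i + 1] * abs(Q[i + 1])
        H_new[i] = (CP + CM) / 2.0
        Q_new[i] = (CP - CM) / (2.0 * B)
    return H_new, Q_new
```

Boundary nodes (valves, reservoirs, pumps) each need their own equation paired with the single characteristic reaching them, which is where a manoeuvre enters the simulation; the per-node independence of this loop is what the parallelization exploits.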

## Parc#: parallel computing with C# in .NET

Source: Springer Berlin
Publisher: Springer Berlin

Type: Journal Article

Published 2005
Portuguese

Search Relevance

680.4014%

This paper describes experiments with the development of a parallel computing platform on top of a compatible C# implementation: the Mono project. This implementation has the advantage of running on both Windows and UNIX platforms and has reached a stable state. The paper presents the performance results obtained and compares them with implementations in Java/RMI. The results show that Mono's network performance, critical for parallel applications, has greatly improved in recent releases: it is superior to that of Java RMI and close to the performance of the new Java nio package. The Mono virtual machine is not yet as highly tuned as the Sun JVM, and thread scheduling needs to be improved. Overall, this platform is a new alternative to explore in the future for parallel computing.

## MATLAB*P 2.0: A unified parallel MATLAB

Source: MIT - Massachusetts Institute of Technology
Publisher: MIT - Massachusetts Institute of Technology

Type: Journal Article
Format: 423708 bytes; application/pdf

Portuguese

Search Relevance

584.5554%

MATLAB is one of the most widely used mathematical computing environments in technical computing. It is an interactive environment that provides high performance computational routines and an easy-to-use, C-like scripting language. Mathworks, the company that develops MATLAB, currently does not provide a version of MATLAB that can utilize parallel computing. This has led to academic and commercial efforts outside Mathworks to build a parallel MATLAB, using a variety of approaches. In a survey, 26 parallel MATLAB projects utilizing four different approaches have been identified. MATLAB*P is one of the 26 projects. It makes use of the backend support approach. This approach provides parallelism to MATLAB programs by relaying MATLAB commands to a parallel backend. The main difference between MATLAB*P and other projects that make use of the same approach is in its focus. MATLAB*P aims to provide a user-friendly supercomputing environment in which parallelism is achieved transparently through the use of object-oriented programming features in MATLAB. One advantage of this approach is that existing scripts can be run in parallel with no or minimal modifications. This paper describes MATLAB*P 2.0, which is a complete rewrite of MATLAB*P. This new version brings together the backend support approach with embarrassingly parallel and MPI approaches to provide the first complete parallel MATLAB framework.; Singapore-MIT Alliance (SMA)
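The "backend support" idea the abstract describes can be sketched as a toy: a frontend object holds only a handle, and operator overloading relays each operation to a backend that owns the data. Everything here (class names, the in-process dictionary backend) is a hypothetical stand-in for MATLAB*P's MATLAB frontend and parallel server.

```python
class PMatrix:
    """Frontend proxy: holds a handle, relays every operation to the
    backend, so user code looks like ordinary arithmetic."""

    def __init__(self, backend, handle):
        self.backend, self.handle = backend, handle

    def __add__(self, other):
        # The add itself happens on the backend, not here.
        return PMatrix(self.backend,
                       self.backend.call("add", self.handle, other.handle))

    def gather(self):
        # Explicitly bring the (conceptually distributed) result home.
        return self.backend.fetch(self.handle)


class ToyBackend:
    """In MATLAB*P this would be a parallel MPI server; here, a dict."""

    def __init__(self):
        self.store, self.next_id = {}, 0

    def put(self, data):
        self.next_id += 1
        self.store[self.next_id] = list(data)
        return self.next_id

    def call(self, op, h1, h2):
        if op == "add":
            a, b = self.store[h1], self.store[h2]
            return self.put(x + y for x, y in zip(a, b))
        raise ValueError(f"unknown op: {op}")

    def fetch(self, handle):
        return self.store[handle]
```

Because the proxy overloads the ordinary operators, an existing script using `a + b` needs no changes to run against the backend, which is the "minimal modifications" advantage the abstract claims.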

## Solving Multiple Classes of Problems in Parallel with MATLAB*P

Source: MIT - Massachusetts Institute of Technology
Publisher: MIT - Massachusetts Institute of Technology

Type: Journal Article
Format: 192920 bytes; application/pdf

Portuguese

Search Relevance

584.6853%

MATLAB [7] is one of the most widely used mathematical computing environments in technical computing. It is an interactive environment that provides high performance computational routines and an easy-to-use, C-like scripting language. Mathworks, the company that develops MATLAB, currently does not provide a version of MATLAB that can utilize parallel computing [9]. This has led to academic and commercial efforts outside Mathworks to build a parallel MATLAB, using a variety of approaches. MATLAB*P is a parallel MATLAB that focuses on enhancing productivity by providing an easy-to-use parallel computing tool. Using syntax identical to regular MATLAB, it can be used to solve large-scale algebraic problems as well as multiple small problems in parallel. This paper describes how the innovative combination of '*p mode' and 'MultiMATLAB/MultiOctave mode' in MATLAB*P can be used to solve a large range of real-world problems.; Singapore-MIT Alliance (SMA)

## Distributed frameworks and parallel algorithms for processing large-scale geographic data

Source: Elsevier Science BV
Publisher: Elsevier Science BV

Type: Journal Article

Published 2003
Portuguese

Search Relevance

588.8658%

#Parallel computing#Distributed computing#Grid computing#Metacomputing#Geographic information systems

The number of applications that require parallel and high-performance computing techniques has diminished in recent years due to the continuing increase in power of PC, workstation and mono-processor systems. However, geographic information systems (GIS) still provide a resource-hungry application domain that can make good use of parallel techniques. We describe our work with geographical systems for environmental and defence applications and some of the algorithms and techniques we have deployed to deliver high-performance prototype systems that can deal with large data sets. GIS applications are often run operationally as part of decision support systems with both a human interactive component and large-scale batch or server-based components. Parallel computing technology embedded in a distributed system therefore provides an ideal and practical solution for multi-site organisations, and especially for government agencies that need to extract the best value from bulk geographic data. We describe the distributed computing approaches we have used to integrate bulk data and metadata sources and the grid computing techniques we have used to embed parallel services in an operational infrastructure. We describe some of the parallel techniques we have used: for data assimilation; for image and map data processing; for data cluster analysis; and for data mining. We also discuss issues related to emerging standards for data exchange and design issues for integrating data in a distributed-ownership system. We include a historical review of our work in this area over the last decade, which leads us to believe parallel computing will continue to play an important role in GIS. We speculate on algorithmic and systems issues for the future.; http://www.elsevier.com/wps/find/journaldescription.cws_home/505617/description#description; Kenneth A. Hawick...

## Results of 2013 Survey of Parallel Computing Needs Focusing on NSF-funded Researchers

Source: Indiana University
Publisher: Indiana University

Type: Report

Portuguese

Search Relevance

578.7201%

The field of supercomputing is experiencing a rapid change in system structure, programming models, and software environments in response to advances in application requirements and in underlying enabling technologies. Traditional parallel programming approaches have relied on static resource allocation and task scheduling through programming interfaces such as MPI and OpenMP. These methods are reaching their efficiency and scalability limits on the new emerging classes of systems, spurring the creation of innovative dynamic strategies and software tools, including advanced runtime system software and programming interfaces that use them. To accelerate adoption of these next-generation methods, Indiana University is investigating the creation of a single supported Reconfigurable Execution Framework Testbed (REFT) to be used by parallel application algorithm developers as well as researchers in advanced tools for parallel computing. These investigations are funded by the National Science Foundation Award Number 1205518 to Indiana University with Thomas Sterling as Principal Investigator, and Maciej Brodowicz, Matthew R. Link, Andrew Lumsdaine, and Craig Stewart as Co-Principal Investigators. As a starting point in this research we proposed to assess needs in parallel computing in general and needs for software tools and testbeds in particular within the NSF-funded research community. As one set of data toward understanding these needs...

## "Use of IU parallel computing resources and high performance file systems - July 2013 to Dec 2014."

Source: Indiana University
Publisher: Indiana University

Type: Report

Portuguese

Search Relevance

672.9969%

This report details use of IU's parallel computing resources and high performance file systems from July 2013 through December 2014.

## A Feasible Graph Partition Framework for Random Walks Implemented by Parallel Computing in Big Graph

Source: Cornell University
Publisher: Cornell University

Type: Journal Article

Published 30/12/2014
Portuguese

Search Relevance

578.61203%

#Computer Science - Social and Information Networks#Computer Science - Distributed, Parallel, and Cluster Computing#Physics - Physics and Society

Graph partition is a fundamental problem of parallel computing for big graph data. Many graph partition algorithms have been proposed to solve the problem in various applications, such as matrix computations and PageRank, but none has paid attention to random walks. Random walks are a widely used method to explore graph structure in many fields. The challenges of graph partition for random walks include the large amount of communication between partitions, the many replications of vertices, and unbalanced partitions. In this paper, we propose a feasible graph partition framework for random walks implemented by parallel computing on big graphs. The framework is based on two optimization functions that reduce bandwidth, memory, and storage cost while guaranteeing load balance. Within this framework, several greedy graph partition algorithms are proposed. We also propose five metrics from different perspectives to evaluate the performance of these algorithms. Running the algorithms on a real-world big graph data set, the experimental results show that the algorithms in the framework can solve the graph partition problem for random walks for different needs; for example, the best result reduces the amount of communication by a factor of more than 70.
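A greedy partitioner of the general kind the abstract mentions can be sketched as follows. This is a generic illustration, not the paper's algorithm: each vertex goes to the part where it already has the most neighbours (to cut few edges, hence less inter-partition communication) subject to a hard size cap (load balance), with ties broken toward the lightest part.

```python
def greedy_partition(edges, n_vertices, k):
    """Greedy balanced k-way partition of an undirected graph.

    Vertices are placed one by one into the part holding most of their
    already-placed neighbours, subject to a ceil(n/k) size cap.
    Returns part[v] in 0..k-1 for every vertex v.
    """
    cap = -(-n_vertices // k)                    # ceil(n / k)
    adj = [[] for _ in range(n_vertices)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    part, sizes = [-1] * n_vertices, [0] * k
    for v in range(n_vertices):
        best, best_score = None, None
        for p in range(k):
            if sizes[p] >= cap:                  # balance constraint
                continue
            gain = sum(1 for u in adj[v] if part[u] == p)
            score = (gain, -sizes[p])            # tie-break: lighter part
            if best_score is None or score > best_score:
                best, best_score = p, score
        part[v] = best
        sizes[best] += 1
    return part

def edge_cut(edges, part):
    """Number of edges crossing partitions (a proxy for communication)."""
    return sum(1 for u, v in edges if part[u] != part[v])
```

On two triangles joined by one bridge edge, the partitioner isolates each triangle and cuts only the bridge, so a random walk crosses partitions only when it takes that edge.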

## Analysis of GPU Parallel Computing based on Matlab

Source: Cornell University
Publisher: Cornell University

Type: Journal Article

Published 25/05/2015
Portuguese

Search Relevance

583.1813%

Matlab is very widely used in scientific computing, but its computational efficiency is lower than that of C programs. To improve computing speed, some toolboxes can use the GPU to accelerate computation. This paper describes the GPU's working principles and our experiments with, and analysis of, parallel computing using the GPU from Matlab. Experimental results show that for parallel operations the GPU is faster than the CPU, while for logical instructions the GPU is slower than the CPU.

## A Survey on Reproducibility in Parallel Computing

Source: Cornell University
Publisher: Cornell University

Type: Journal Article

Published 13/11/2015
Portuguese

Search Relevance

578.61203%

We summarize the results of a survey on reproducibility in parallel computing, which was conducted during the Euro-Par conference in August 2015. The survey form was handed out to all participants of the conference and the workshops. The questionnaire, which specifically targeted the parallel computing community, contained questions in four different categories: general questions on reproducibility, the current state of reproducibility, the reproducibility of the participants' own papers, and questions about the participants' familiarity with tools, software, or open-source software licenses used for reproducible research.; Comment: 15 pages, 24 figures

## Towards Parallel Computing on the Internet: Applications, Architectures, Models and Programming Tools

Source: Cornell University
Publisher: Cornell University

Type: Journal Article

Portuguese

Search Relevance

584.6853%

The development of Internet-wide resources for general-purpose parallel computing poses the challenging task of matching computation and communication complexity. A number of parallel computing models exist that address this for traditional parallel architectures, and there are a number of emerging models that attempt to do the same for large-scale Internet-based systems such as computational grids. In this survey we cover the three fundamental aspects -- application, architecture, and model -- and show how they have developed over the last decade. We also cover programming tools that are currently being used for parallel programming in computational grids. The trend in conventional computational models is to emphasize efficient communication between participating nodes by adapting different types of communication to network conditions. The effects of dynamism and uncertainty that arise in large-scale systems are evidently important to understand, and yet there is currently little work that addresses them from a parallel computing perspective.; Comment: 39 pages, 9 figures

## Efficient Ranking and Selection in Parallel Computing Environments

Source: Cornell University
Publisher: Cornell University

Type: Journal Article

Published 16/06/2015
Portuguese

Search Relevance

584.0504%

#Computer Science - Distributed, Parallel, and Cluster Computing#Mathematics - Optimization and Control

The goal of ranking and selection (R&S) procedures is to identify the best stochastic system from among a finite set of competing alternatives. Such procedures require constructing estimates of each system's performance, which can be obtained simultaneously by running multiple independent replications on a parallel computing platform. However, nontrivial statistical and implementation issues arise when designing R&S procedures for a parallel computing environment. We therefore propose several design principles for parallel R&S procedures that preserve statistical validity and maximize core utilization, especially when large numbers of alternatives or cores are involved. These principles are followed closely by our parallel Good Selection Procedure (GSP), which, under the assumption of normally distributed output, (i) guarantees to select a system in the indifference zone with high probability, (ii) runs efficiently on up to 1,024 parallel cores, and (iii) in an example uses smaller sample sizes than existing parallel procedures, particularly for large problems (over $10^6$ alternatives). In our computational study we discuss two methods for implementing GSP on parallel computers, namely the Message-Passing Interface (MPI) and Hadoop MapReduce, and show that the latter provides good protection against core failures at the expense of a significant drop in utilization due to periodic unavoidable synchronization.
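The parallel-replication idea underlying such procedures can be sketched as below. This is not GSP itself: the toy simulator, the per-replication seeding, and the worker pool are illustrative assumptions, and threads stand in for the MPI ranks or MapReduce tasks a real deployment would use.

```python
import random
from statistics import mean
from concurrent.futures import ThreadPoolExecutor

def simulate(system_id, replication):
    """Hypothetical stochastic system: true mean equals system_id.
    Seeding on (system, replication) keeps replications independent
    and reproducible regardless of which worker runs them."""
    rng = random.Random(system_id * 100003 + replication)
    return system_id + rng.gauss(0.0, 0.5)

def select_best(n_systems, n_reps, n_workers=4):
    """Spread every replication of every system over a worker pool,
    then select the system with the largest sample mean."""
    jobs = [(s, r) for s in range(n_systems) for r in range(n_reps)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        outputs = list(pool.map(lambda job: (job[0], simulate(*job)), jobs))
    means = {s: mean(y for sid, y in outputs if sid == s)
             for s in range(n_systems)}
    return max(means, key=means.get)
```

Treating the whole experiment as one bag of (system, replication) tasks is what keeps cores busy; the statistical subtleties the abstract mentions (stopping rules, indifference zones, output ordering) sit on top of this skeleton.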

## Parallel Computing Environments and Methods for Power Distribution System Simulation

Source: Cornell University
Publisher: Cornell University

Type: Journal Article

Published 18/09/2004
Portuguese

Search Relevance

582.41582%

#Computer Science - Distributed, Parallel, and Cluster Computing#Computer Science - Computational Engineering, Finance, and Science#Computer Science - Multiagent Systems#Computer Science - Performance

The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. As the number of appliances increases, the simulation uses up the PC's memory and its runtime grows to a point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort to port a PC-based power distribution system simulator to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer is compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.; Comment: 7 pages, 4 figures, 6 tables, submitted to HICSS-38

## Genetic Algorithm Modeling with GPU Parallel Computing Technology

Source: Cornell University
Publisher: Cornell University

Type: Journal Article

Published 23/11/2012
Portuguese

Search Relevance

580.62645%

#Astrophysics - Instrumentation and Methods for Astrophysics#Computer Science - Distributed, Parallel, and Cluster Computing#Computer Science - Neural and Evolutionary Computing

We present a multi-purpose genetic algorithm, designed and implemented with GPGPU/CUDA parallel computing technology. The model was derived from a multi-core CPU serial implementation, named GAME, already successfully tested and validated on astrophysical massive data classification problems through a web application resource (DAMEWARE) specialized in data mining based on machine learning paradigms. Since genetic algorithms are inherently parallel, the GPGPU computing paradigm allows the internal training features of the model to be exploited, permitting strong optimization in terms of processing performance and scalability.; Comment: 11 pages, 2 figures, refereed proceedings; Neural Nets and Surroundings, Proceedings of 22nd Italian Workshop on Neural Nets, WIRN 2012; Smart Innovation, Systems and Technologies, Vol. 19, Springer
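The "inherently parallel" structure of a genetic algorithm is visible even in a minimal serial sketch: fitness is evaluated independently per individual, which is exactly what a GPGPU port assigns one thread each. This toy solves OneMax (maximise the number of 1-bits) and is unrelated to GAME; all parameters are illustrative.

```python
import random

def genetic_onemax(n_bits=20, pop_size=30, generations=60, seed=1):
    """Minimal serial GA on OneMax. The fitness evaluations inside the
    sort key are mutually independent -- the part a CUDA implementation
    would run in parallel, one thread per individual."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=sum, reverse=True)  # fitness = sum of bits
        elite = scored[: pop_size // 2]              # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)           # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_bits)                # single point mutation
            child[i] ^= 1
            children.append(child)
        pop = elite + children                       # elitist replacement
    return max(sum(ind) for ind in pop)
```

Selection and crossover need the whole population, so a GPU version alternates parallel fitness phases with (cheaper) global phases; that split is where the scalability the abstract reports comes from.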

## Asynchronous Parallel Computing Algorithm implemented in 1D Heat Equation with CUDA

Source: Cornell University
Publisher: Cornell University

Type: Journal Article

Portuguese

Search Relevance

585.16305%

In this note, we present a stability and performance analysis of an asynchronous parallel computing algorithm implemented for the 1D heat equation with CUDA. The primary objective of this note is to disseminate the asynchronous parallel computing algorithm by providing CUDA code for fast and easy implementation. We show that simulations carried out on an nVIDIA GPU device with the asynchronous scheme outperform the synchronous parallel computing algorithm. In addition, we discuss some drawbacks of asynchronous parallel computing algorithms.; Comment: arXiv admin note: text overlap with arXiv:1503.03952
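For reference, the synchronous baseline the note compares against is the standard explicit (FTCS) update for u_t = α·u_xx; this is a generic Python sketch, not the note's CUDA code. The asynchronous variant differs in letting thread blocks read stale neighbour values instead of synchronizing between steps.

```python
def heat_step(u, alpha, dx, dt):
    """One synchronous (Jacobi-style) explicit step of u_t = alpha*u_xx
    with fixed Dirichlet boundaries. All new values are computed from
    the previous time level, so every grid point is independent -- the
    parallelism a GPU kernel exploits."""
    r = alpha * dt / dx ** 2      # explicit stability requires r <= 0.5
    nxt = u[:]
    for i in range(1, len(u) - 1):
        nxt[i] = u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
    return nxt
```

Because each step reads only the previous time level, a synchronous GPU implementation must place a global barrier between steps; dropping that barrier is precisely what the asynchronous scheme trades accuracy guarantees for.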

## Can Agent Intelligence be used to Achieve Fault Tolerant Parallel Computing Systems?

Source: Cornell University
Publisher: Cornell University

Type: Journal Article

Published 13/08/2013
Portuguese

Search Relevance

581.6938%

#Computer Science - Distributed, Parallel, and Cluster Computing#Computer Science - Multiagent Systems

The work reported in this paper is motivated by the need to validate an alternative to traditional fault-tolerance methods, such as checkpointing, that constrain efficacious fault tolerance. Can agent intelligence be used to achieve fault-tolerant parallel computing systems? If so, the questions "What agent capabilities are required for fault tolerance?", "What parallel computational tasks can benefit from such agent capabilities?" and "How can agent capabilities be implemented for fault tolerance?" need to be addressed. Cognitive capabilities essential for achieving fault tolerance through agents are considered. Parallel reduction algorithms are identified as a class of algorithms that can benefit from cognitive agent capabilities. The Message Passing Interface is utilized to implement an intelligent-agent-based approach. Preliminary results obtained from the experiments validate the feasibility of an agent-based approach for achieving fault tolerance in parallel computing systems.
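The parallel reduction pattern the abstract singles out can be sketched generically (this is the standard tree reduction, not the paper's agent implementation). Its structure explains why it suits localized fault recovery: losing one pair loses only one subtree, which could be recomputed without restarting the whole job.

```python
def tree_reduce(values, op):
    """Pairwise (tree) reduction with an associative operator.

    Pairs within one level are independent, so on a parallel machine
    each level runs in a single step, giving O(log n) depth instead of
    the O(n) of a serial left fold."""
    level = list(values)
    while len(level) > 1:
        nxt = [op(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:           # odd element carries to next level
            nxt.append(level[-1])
        level = nxt
    return level[0]
```

Correctness relies only on `op` being associative, so the same skeleton serves sums, maxima, or any monoid.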

## On the State and Importance of Reproducible Experimental Research in Parallel Computing

Source: Cornell University
Publisher: Cornell University

Type: Journal Article

Published 16/08/2013
Portuguese

Search Relevance

586.14016%

Computer science is also an experimental science. This is particularly the case for parallel computing, which is in a total state of flux, and where experiments are necessary to substantiate, complement, and challenge theoretical modeling and analysis. Here, experimental work is as important as advances in theory, which are indeed often driven by experimental findings. In parallel computing, scientific contributions presented in research articles are therefore often based on experimental data, with a substantial part devoted to presenting and discussing the experimental findings. As in all experimental science, experiments must be presented in a way that makes reproduction by other researchers possible, in principle. Despite appearances to the contrary, we contend that reproducibility plays a small role and is typically not achieved. Articles often do not have a sufficiently detailed description of their experiments, and do not make available the software used to obtain the claimed results. As a consequence, parallel computational results are most often impossible to reproduce, often questionable, and therefore of little or no scientific value. We believe that the description of how to reproduce findings should play an important part in every serious...

## Survey of Parallel Computing with MATLAB

Source: Cornell University
Publisher: Cornell University

Type: Journal Article

Published 25/07/2014
Portuguese

Search Relevance

586.26305%

Matlab is one of the most widely used mathematical computing environments in technical computing. It provides an interactive environment with high performance computing (HPC) routines and is easy to use. Parallel computing with Matlab has been an area of interest for parallel computing researchers for a number of years, and there have been many attempts to parallelize Matlab. In this paper, we present most of the past and present attempts at parallel Matlab, such as MatlabMPI, bcMPI, pMatlab, Star-P, and PCT. Finally, we consider expected future attempts.; Comment: 9 pages, 11 figures

## Course: Parallel Computing II

Source: Faculty Learning Community
Publisher: Faculty Learning Community

Type: Newsletter

Portuguese

Search Relevance

672.9969%

Faculty member, Department of Computer Science. This portfolio contains Schaller's table of contents, teaching philosophy, Parallel Computing II course syllabus, outcomes, solutions, and reflections.

## Parallel Computing Applied to Satellite Images Processing for Solar Resource Estimates

Source: CLEI Electronic Journal
Publisher: CLEI Electronic Journal

Type: Journal Article
Format: text/html

Published 01/12/2012
Portuguese

Search Relevance

678.612%

This article presents the application of parallel computing techniques to process satellite imagery for solar resource estimates. A distributed-memory parallel algorithm is introduced, which is capable of generating the required inputs from visible-channel images to feed a statistical solar irradiation model. The parallelization strategy consists in distributing the images among the available processors, so that every image is accessed by only one process. The experimental analysis demonstrates that a maximum speedup of 2.32 is achieved when using four computing resources; beyond that point, performance decreases due to hard-disk input/output speed.
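The one-image-per-process strategy maps naturally onto a task pool. The sketch below is illustrative only: the per-image "kernel" and its threshold are hypothetical stand-ins for the paper's visible-channel processing, and threads stand in for the distributed-memory processes a real implementation would use, which is also where the disk-I/O bottleneck the article reports would appear.

```python
from concurrent.futures import ThreadPoolExecutor

def bright_fraction(image):
    """Hypothetical per-image kernel: fraction of 'bright' pixels in a
    2D list of intensities -- a toy stand-in for deriving solar-model
    inputs from one satellite image."""
    pixels = [p for row in image for p in row]
    return sum(1 for p in pixels if p > 128) / len(pixels)

def process_images(images, workers=4):
    """One task per image, so each image is read by exactly one worker,
    mirroring the paper's distribution strategy."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(bright_fraction, images))
```

With whole images as the unit of work there is no inter-worker communication at all, so speedup is limited not by synchronization but by how fast images can be read from disk, consistent with the article's observed plateau beyond four processors.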
