# The best tool for your research, coursework, and undergraduate thesis!

Page 1 of results: 2413 digital items found in 0.013 seconds

Results filtered by Publisher: Cornell University

## Computationally Efficient Bayesian Learning of Gaussian Process State Space Models

Source: Cornell University
Publisher: Cornell University

Type: Scientific Journal Article

Published on 07/06/2015
Language: Portuguese

Search relevance: 57.5128%

Gaussian processes allow for flexible specification of prior assumptions of
unknown dynamics in state space models. We present a procedure for efficient
Bayesian learning in Gaussian process state space models, where the
representation is formed by projecting the problem onto a set of approximate
eigenfunctions derived from the prior covariance structure. Learning under this
family of models can be conducted using a carefully crafted particle MCMC
algorithm. This scheme is computationally efficient and yet allows for a fully
Bayesian treatment of the problem. Compared to conventional system
identification tools or existing learning methods, we show competitive
performance and reliable quantification of uncertainties in the model.
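The eigenfunction projection described above can be illustrated with a reduced-rank approximation of a Gaussian process covariance. Below is a minimal sketch, not the authors' code: the prior covariance is expanded in Laplacian eigenfunctions on an interval, weighted by the kernel's spectral density; the squared-exponential kernel, domain size, and number of basis functions are illustrative assumptions.

```python
import numpy as np

# Approximate a stationary kernel as k(x, x') ≈ sum_j S(sqrt(lambda_j)) phi_j(x) phi_j(x'),
# where phi_j are Laplacian eigenfunctions on [-L, L] and S is the kernel's
# spectral density (squared-exponential kernel here).

def basis(x, m, L):
    """Eigenfunctions phi_j(x) = sin(pi*j*(x+L)/(2L)) / sqrt(L), j = 1..m."""
    j = np.arange(1, m + 1)
    return np.sin(np.pi * j * (x[:, None] + L) / (2 * L)) / np.sqrt(L)

def se_spectral_density(w, ell=1.0, sf2=1.0):
    """Spectral density of the 1-D squared-exponential kernel."""
    return sf2 * np.sqrt(2 * np.pi) * ell * np.exp(-0.5 * (ell * w) ** 2)

L_dom, m = 4.0, 64
x = np.linspace(-2, 2, 50)
Phi = basis(x, m, L_dom)                              # (50, m) feature matrix
lam = (np.pi * np.arange(1, m + 1) / (2 * L_dom)) ** 2  # Laplacian eigenvalues
K_approx = Phi @ np.diag(se_spectral_density(np.sqrt(lam))) @ Phi.T

# Exact squared-exponential kernel for comparison.
d = x[:, None] - x[None, :]
K_exact = np.exp(-0.5 * d ** 2)
print(np.max(np.abs(K_approx - K_exact)))             # small approximation error
```

The state-space dynamics are then learned over the finite weight vector of this basis rather than over a full nonparametric function, which is what makes particle MCMC affordable.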


## Compressed Sensing for Energy-Efficient Wireless Telemonitoring of Noninvasive Fetal ECG via Block Sparse Bayesian Learning

Source: Cornell University
Publisher: Cornell University

Type: Scientific Journal Article

Language: Portuguese

Search relevance: 57.28901%

Fetal ECG (FECG) telemonitoring is an important branch in telemedicine. The
design of a telemonitoring system via a wireless body-area network with low
energy consumption for ambulatory use is highly desirable. As an emerging
technique, compressed sensing (CS) shows great promise in
compressing/reconstructing data with low energy consumption. However, due to
some specific characteristics of raw FECG recordings such as non-sparsity and
strong noise contamination, current CS algorithms generally fail in this
application.
This work proposes to use the block sparse Bayesian learning (BSBL) framework
to compress/reconstruct non-sparse raw FECG recordings. Experimental results
show that the framework can reconstruct the raw recordings with high quality.
In particular, the reconstruction preserves the interdependence among the
multichannel recordings, which ensures that the independent component
analysis decomposition of the reconstructed recordings has high fidelity.
Furthermore, the framework allows the use of a sparse binary sensing matrix
with far fewer nonzero entries to compress recordings; in particular, each
column of the matrix can contain only two nonzero entries. This shows that the
framework, compared to other algorithms such as current CS algorithms and
wavelet algorithms...
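The two-nonzeros-per-column sensing matrix mentioned above is simple to construct. A hedged sketch follows (the sizes are hypothetical, not taken from the paper): compression y = Φx then needs only a couple of additions per original sample, which is what makes the on-sensor encoder energy-cheap.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 256                      # compressed / original lengths (illustrative)

# Sparse binary sensing matrix: exactly two nonzero entries per column.
Phi = np.zeros((n, m))
for col in range(m):
    rows = rng.choice(n, size=2, replace=False)
    Phi[rows, col] = 1.0

x = rng.standard_normal(m)          # stand-in for a raw (non-sparse) FECG segment
y = Phi @ x                         # compression uses only additions, no multiplies

print(Phi.sum(axis=0))              # every column sums to 2
```

The reconstruction side (BSBL) is where the heavy computation lives, but that runs at the remote terminal, not on the battery-powered sensor.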


## Sparse Signal Recovery with Temporally Correlated Source Vectors Using Sparse Bayesian Learning

Source: Cornell University
Publisher: Cornell University

Type: Scientific Journal Article

Language: Portuguese

Search relevance: 57.5128%

We address the sparse signal recovery problem in the context of multiple
measurement vectors (MMV) when elements in each nonzero row of the solution
matrix are temporally correlated. Existing algorithms do not consider such
temporal correlations, and their performance degrades significantly when the
correlations are present. In this work, we propose a block sparse Bayesian learning
framework which models the temporal correlations. In this framework we derive
two sparse Bayesian learning (SBL) algorithms, which have superior recovery
performance compared to existing algorithms, especially in the presence of high
temporal correlations. Furthermore, our algorithms are better at handling
highly underdetermined problems and require less row-sparsity on the solution
matrix. We also provide analysis of the global and local minima of their cost
function, and show that the SBL cost function has the very desirable property
that the global minimum is at the sparsest solution to the MMV problem.
Extensive experiments also provide some interesting results that motivate
future theoretical research on the MMV model.
Comment: The final version with some typos corrected. Code can be downloaded at: http://dsp.ucsd.edu/~zhilin/TSBL_code.zip
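The temporal-correlation structure this abstract refers to can be sketched by generating an MMV problem whose active rows follow an AR(1) correlation model. The sizes, sparsity level, and correlation coefficient below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, L, r = 25, 60, 4, 0.9   # measurements, dictionary atoms, snapshots, AR(1) corr.

# Temporal correlation matrix B with entries r^|i-j| (AR(1) structure): elements
# within each nonzero row of X are correlated across the L measurement vectors.
B = r ** np.abs(np.subtract.outer(np.arange(L), np.arange(L)))

# Row-sparse solution matrix: each active row is drawn from N(0, B).
X = np.zeros((m, L))
active = rng.choice(m, size=5, replace=False)
X[active] = rng.multivariate_normal(np.zeros(L), B, size=5)

Phi = rng.standard_normal((n, m)) / np.sqrt(n)
Y = Phi @ X                   # MMV observations (noiseless here)
print(Y.shape)
```

Algorithms that treat the L columns as independent ignore B entirely; the paper's point is that modeling B inside the SBL prior is what restores recovery performance when r is large.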


## Tractable Bayesian Learning of Tree Belief Networks

Source: Cornell University
Publisher: Cornell University

Type: Scientific Journal Article

Published on 16/01/2013
Language: Portuguese

Search relevance: 57.5128%

Subjects: Computer Science - Learning; Computer Science - Artificial Intelligence; Statistics - Machine Learning

In this paper we present decomposable priors, a family of priors over
structure and parameters of tree belief nets for which Bayesian learning with
complete observations is tractable, in the sense that the posterior is also
decomposable and can be completely determined analytically in polynomial time.
This follows from two main results: First, we show that factored distributions
over spanning trees in a graph can be integrated in closed form. Second, we
examine priors over tree parameters and show that a set of assumptions similar
to (Heckerman et al. 1995) constrain the tree parameter priors to be a
compactly parameterized product of Dirichlet distributions. Besides allowing for
exact Bayesian learning, these results permit us to formulate a new class of
tractable latent variable models in which the likelihood of a data point is
computed through an ensemble average over tree structures.
Comment: Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
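The closed-form integration over spanning trees rests on the Matrix-Tree theorem, which the following sketch illustrates on a toy weighted graph (the graph and its weights are made up for illustration):

```python
import numpy as np

# Matrix-Tree theorem: for a graph with symmetric edge weights w_uv, the sum
# over all spanning trees of the product of edge weights equals any cofactor
# of the weighted graph Laplacian. This is the closed-form summation over
# tree structures that decomposable priors rely on.

W = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])            # weights on a 3-node complete graph
Lap = np.diag(W.sum(axis=1)) - W
tree_sum = np.linalg.det(Lap[1:, 1:])   # delete row/column 0, take determinant

# Brute-force check: the 3 spanning trees of K3 have weights 2*1, 2*3, 1*3.
print(tree_sum)                         # ≈ 11 = 2 + 6 + 3
```

Because the posterior over tree structures stays in this factored form after conditioning on data, the normalizing constant is a single determinant rather than a sum over n^(n-2) trees.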


## Spatiotemporal Sparse Bayesian Learning with Applications to Compressed Sensing of Multichannel Physiological Signals

Source: Cornell University
Publisher: Cornell University

Type: Scientific Journal Article

Language: Portuguese

Search relevance: 57.5128%

Energy consumption is an important issue in continuous wireless
telemonitoring of physiological signals. Compressed sensing (CS) is a promising
framework to address it, due to its energy-efficient data compression
procedure. However, most CS algorithms have difficulty in data recovery due to
the non-sparse nature of many physiological signals. Block sparse
Bayesian learning (BSBL) is an effective approach to recover such signals with
satisfactory recovery quality. However, it is time-consuming in recovering
multichannel signals, since its computational load almost linearly increases
with the number of channels.
This work proposes a spatiotemporal sparse Bayesian learning algorithm to
recover multichannel signals simultaneously. It not only exploits temporal
correlation within each channel signal, but also exploits inter-channel
correlation among different channel signals. Furthermore, its computational
load is not significantly affected by the number of channels. The proposed
algorithm was applied to brain-computer interface (BCI) and EEG-based driver's
drowsiness estimation. Results showed that the algorithm had both better
recovery performance and much higher speed than BSBL. In particular, the
proposed algorithm ensured that the BCI classification and the drowsiness
estimation suffered little degradation even when data were compressed by 80%...


## Bayesian Learning in Undirected Graphical Models: Approximate MCMC algorithms

Source: Cornell University
Publisher: Cornell University

Type: Scientific Journal Article

Published on 11/07/2012
Language: Portuguese

Search relevance: 57.5128%

Bayesian learning in undirected graphical models, that is, computing posterior
distributions over parameters and predictive quantities, is exceptionally
difficult. We conjecture that for general undirected models, there are no
tractable MCMC (Markov Chain Monte Carlo) schemes giving the correct
equilibrium distribution over parameters. While this intractability, due to the
partition function, is familiar to those performing parameter optimisation,
Bayesian learning of posterior distributions over undirected model parameters
has been unexplored and poses novel challenges. We propose several approximate
MCMC schemes and test them on fully observed binary models (Boltzmann machines) for
a small coronary heart disease data set and larger artificial systems. While
approximations must perform well on the model, their interaction with the
sampling scheme is also important. Samplers based on variational mean-field
approximations generally performed poorly; more advanced methods using loopy
propagation, brief sampling, and stochastic dynamics led to acceptable
parameter posteriors. Finally, we demonstrate these techniques on a Markov
random field with hidden variables.
Comment: Appears in Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence (UAI2004)


## Pattern-Coupled Sparse Bayesian Learning for Recovery of Block-Sparse Signals

Source: Cornell University
Publisher: Cornell University

Type: Scientific Journal Article

Published on 09/11/2013
Language: Portuguese

Search relevance: 57.5128%

We consider the problem of recovering block-sparse signals whose structures
are unknown \emph{a priori}. Block-sparse signals with nonzero coefficients
occurring in clusters arise naturally in many practical scenarios. However, the
knowledge of the block structure is usually unavailable in practice. In this
paper, we develop a new sparse Bayesian learning method for recovery of
block-sparse signals with unknown cluster patterns. Specifically, a
pattern-coupled hierarchical Gaussian prior model is introduced to characterize
the statistical dependencies among coefficients, in which a set of
hyperparameters are employed to control the sparsity of signal coefficients.
Unlike the conventional sparse Bayesian learning framework, in which each
hyperparameter is associated independently with one coefficient, here the
prior for each coefficient involves not only its own hyperparameter but also
those of its immediate neighbors. In this way, the sparsity patterns of
neighboring coefficients are related to each other and the hierarchical model
has the potential to encourage
structured-sparse solutions. The hyperparameters, along with the sparse signal,
are learned by maximizing their posterior probability via an
expectation-maximization (EM) algorithm. Numerical results show that the
proposed algorithm consistently outperforms other existing methods across a
series of experiments.
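The coupled prior can be sketched in a few lines. The coupling rule below follows the description above (each coefficient's precision mixes its own hyperparameter with its neighbors'), but the specific numbers and the coupling weight beta are illustrative assumptions, not the paper's values:

```python
import numpy as np

def coupled_precision(alpha, beta=0.5):
    """Precision of coefficient i: alpha_i + beta * (alpha_{i-1} + alpha_{i+1})."""
    left = np.r_[0.0, alpha[:-1]]     # alpha_{i-1}, zero-padded at the border
    right = np.r_[alpha[1:], 0.0]     # alpha_{i+1}
    return alpha + beta * (left + right)

# Large alpha => coefficient pruned to zero; small alpha => coefficient active.
alpha = np.array([100., 100., 0.1, 0.1, 0.1, 100., 100.])
prior_var = 1.0 / coupled_precision(alpha)
print(np.round(prior_var, 3))   # variance is largest deep inside the active block
```

With beta = 0 this reduces to conventional SBL with independent hyperparameters; with beta > 0, a coefficient bordering a pruned region inherits some of its neighbors' large precision, which is what nudges the solution toward contiguous blocks.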


## Fast Marginalized Block Sparse Bayesian Learning Algorithm

Source: Cornell University
Publisher: Cornell University

Type: Scientific Journal Article

Language: Portuguese

Search relevance: 57.28901%

The performance of sparse signal recovery from noise corrupted,
underdetermined measurements can be improved if both sparsity and correlation
structure of signals are exploited. One typical correlation structure is the
intra-block correlation in block sparse signals. To exploit this structure, a
framework, called block sparse Bayesian learning (BSBL), has been proposed
recently. Algorithms derived from this framework showed superior performance
but they are not very fast, which limits their applications. This work derives
an efficient algorithm from this framework, using a marginalized likelihood
maximization method. Compared to existing BSBL algorithms, it achieves
comparable recovery performance but is much faster. It is therefore more
suitable for large-scale datasets and applications requiring real-time
implementation.


## Impulsive Noise Mitigation in Powerline Communications Using Sparse Bayesian Learning

Source: Cornell University
Publisher: Cornell University

Type: Scientific Journal Article

Published on 05/03/2013
Language: Portuguese

Search relevance: 57.21097%

Additive asynchronous and cyclostationary impulsive noise limits
communication performance in OFDM powerline communication (PLC) systems.
Conventional OFDM receivers assume additive white Gaussian noise and hence
experience degradation in communication performance in impulsive noise.
Alternate designs assume a parametric statistical model of impulsive noise and
use the model parameters in mitigating impulsive noise. These receivers require
overhead in training and parameter estimation, and degrade due to model and
parameter mismatch, especially in highly dynamic environments. In this paper,
we model impulsive noise as a sparse vector in the time domain without any
other assumptions, and apply sparse Bayesian learning methods for estimation
and mitigation without training. We propose three iterative algorithms with
different complexity vs. performance trade-offs: (1) we utilize the noise
projection onto null and pilot tones to estimate and subtract the noise
impulses; (2) we add the information in the data tones to perform joint noise
estimation and OFDM detection; (3) we embed our algorithm into a decision
feedback structure to further enhance the performance of coded systems. When
compared to conventional OFDM PLC receivers, the proposed receivers achieve SNR
gains of up to 9 dB in coded and 10 dB in uncoded systems in the presence of
impulsive noise.
Comment: To appear in the IEEE Journal on Selected Areas in Communications
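Algorithm (1) above, estimating the sparse time-domain noise from its projection onto null tones, can be sketched with a generic greedy solver. Orthogonal matching pursuit stands in here for the paper's sparse Bayesian learning step, and all sizes and the noiseless setting are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_null, k = 64, 32, 3                  # FFT size, null tones, impulses (illustrative)

F = np.fft.fft(np.eye(N)) / np.sqrt(N)    # unitary DFT matrix
null = rng.choice(N, size=n_null, replace=False)
A = F[null]                               # null-tone rows observe only the noise

e = np.zeros(N)                           # impulsive noise: sparse in time
loc = rng.choice(N, size=k, replace=False)
e[loc] = rng.choice([-1.0, 1.0], size=k) * (5 + rng.random(k))
y = A @ e                                 # frequency-domain samples on null tones

# Greedy recovery of impulse locations and amplitudes (orthogonal matching
# pursuit): pick the time index most correlated with the residual, refit.
resid, support = y.copy(), []
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.conj().T @ resid))))
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    resid = y - A[:, support] @ sol

e_hat = np.zeros(N, dtype=complex)
e_hat[support] = sol                      # estimated impulses, to be subtracted
```

Because the transmitted symbols are zero on null tones, those frequency samples depend on the impulsive noise alone, so no training is needed; algorithms (2) and (3) in the abstract extend this by also exploiting data tones and decoder feedback.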


## The Annealing Sparse Bayesian Learning Algorithm

Source: Cornell University
Publisher: Cornell University

Type: Scientific Journal Article

Language: Portuguese

Search relevance: 57.459937%

In this paper we propose a two-level hierarchical Bayesian model and an
annealing schedule to re-enable the noise-variance learning capability of the
fast marginalized Sparse Bayesian Learning algorithms. Performance measures
such as NMSE and F-measure are greatly improved by the annealing technique.
The algorithm tends to produce the sparsest solution under moderate SNR
scenarios and can outperform most concurrent SBL algorithms while retaining a
small computational load.
Comment: The update equation in the annealing process was too empirical for practical usage. This paper needs to be revised before publication on arXiv.


## Bayesian Learning for Low-Rank Matrix Reconstruction

Source: Cornell University
Publisher: Cornell University

Type: Scientific Journal Article

Published on 23/01/2015
Language: Portuguese

Search relevance: 57.28901%

We develop latent variable models for Bayesian-learning-based low-rank matrix
completion and reconstruction from linear measurements. For underdetermined
systems, the developed methods are shown to reconstruct low-rank matrices when
neither the rank nor the noise power is known a priori. We derive relations
between the latent variable models and several low-rank-promoting penalty
functions. These relations justify the use of Kronecker-structured covariance
matrices in a Gaussian-based prior. In the methods, we use evidence
approximation and expectation-maximization to learn the model parameters. The
performance of the methods is evaluated through extensive numerical
simulations.
Comment: Submitted to IEEE Transactions on Signal Processing


## Sparse Bayesian Learning for EEG Source Localization

Source: Cornell University
Publisher: Cornell University

Type: Scientific Journal Article

Published on 19/01/2015
Language: Portuguese

Search relevance: 57.21097%

Subjects: Quantitative Biology - Quantitative Methods; Computer Science - Learning; Quantitative Biology - Neurons and Cognition

Purpose: Localizing the sources of electrical activity from
electroencephalographic (EEG) data has gained considerable attention over the
last few years. In this paper, we propose an innovative source localization
method for EEG, based on Sparse Bayesian Learning (SBL). Methods: To better
specify the sparsity profile and to ensure efficient source localization, the
proposed approach considers grouping of the electrical current dipoles inside
the human brain. SBL is used to solve the localization problem together with
the imposed constraint that the electric current dipoles associated with brain
activity are isotropic. Results: Numerical experiments are conducted on a
realistic head model that is obtained by segmentation of MRI images of the head
and includes four major components, namely the scalp, the skull, the
cerebrospinal fluid (CSF) and the brain, with appropriate relative conductivity
values. The results demonstrate that the isotropy constraint significantly
improves the performance of SBL. In a noiseless environment, the proposed
method was found to accurately (with accuracy >75%) locate up to 6
simultaneously active sources, whereas for SBL without the isotropy constraint
the accuracy of finding just 3 simultaneously active sources was <75%.
Conclusions: Compared to the state-of-the-art algorithms...


## Hierarchical sparse Bayesian learning: theory and application for inferring structural damage from incomplete modal data

Source: Cornell University
Publisher: Cornell University

Type: Scientific Journal Article

Published on 21/03/2015
Language: Portuguese

Search relevance: 57.740703%

Structural damage due to excessive loading or environmental degradation
typically occurs in localized areas in the absence of collapse. This prior
information about the spatial sparseness of structural damage is exploited here
by a hierarchical sparse Bayesian learning framework with the goal of reducing
the source of ill-conditioning in the stiffness loss inversion problem for
damage detection. Sparse Bayesian learning methodologies automatically prune
away irrelevant or inactive features from a set of potential candidates, and so
they are effective probabilistic tools for producing sparse explanatory
subsets. We have previously proposed such an approach to establish the
probability of localized stiffness reductions that serve as a proxy for damage
by using noisy incomplete modal data from before and after possible damage. The
core idea centers on a specific hierarchical Bayesian model that promotes
spatial sparseness in the inferred stiffness reductions in a way that is
consistent with the Bayesian Ockham razor. In this paper, we improve the theory
of our previously proposed sparse Bayesian learning approach by eliminating an
approximation and, more importantly, incorporating a constraint on stiffness
increases. Our approach has many appealing features that are summarized at the
end of the paper. We validate the approach by applying it to the Phase II
simulated and experimental benchmark studies sponsored by the IASC-ASCE Task
Group on Structural Health Monitoring. The results show that it can reliably
detect...


## Cramér-Rao-Type Bounds for Sparse Bayesian Learning

Source: Cornell University
Publisher: Cornell University

Type: Scientific Journal Article

Language: Portuguese

Search relevance: 57.459937%

In this paper, we derive Hybrid, Bayesian, and Marginalized Cramér-Rao
lower bounds (HCRB, BCRB and MCRB) for the single and multiple measurement
vector Sparse Bayesian Learning (SBL) problem of estimating compressible
vectors and their prior distribution parameters. We assume the unknown vector
to be drawn from a compressible Student-t prior distribution. We derive CRBs
that encompass the deterministic or random nature of the unknown parameters of
the prior distribution and the regression noise variance. We extend the MCRB to
the case where the compressible vector is distributed according to a general
compressible prior distribution, of which the generalized Pareto distribution
is a special case. We use the derived bounds to uncover the relationship
between the compressibility and Mean Square Error (MSE) in the estimates.
Further, we illustrate the tightness and utility of the bounds through
simulations, by comparing them with the MSE performance of two popular
SBL-based estimators. It is found that the MCRB is generally the tightest among
the bounds derived and that the MSE performance of the Expectation-Maximization
(EM) algorithm coincides with the MCRB for the compressible vector. Through
simulations, we demonstrate the dependence of the MSE performance of SBL based
estimators on the compressibility of the vector for several values of the
number of observations and at different signal powers.
Comment: Accepted for publication in the IEEE Transactions on Signal Processing...


## Big Learning with Bayesian Methods

Source: Cornell University
Publisher: Cornell University

Type: Scientific Journal Article

Published on 24/11/2014
Language: Portuguese

Search relevance: 47.88741%

Subjects: Computer Science - Learning; Statistics - Applications; Statistics - Computation; Statistics - Methodology; Statistics - Machine Learning; F.1.2; G.3

Explosive growth in data and availability of cheap computing resources have
sparked increasing interest in Big learning, an emerging subfield that studies
scalable machine learning algorithms, systems, and applications with Big Data.
Bayesian methods represent one important class of statistical methods for
machine learning, with substantial recent developments on adaptive, flexible,
and scalable Bayesian learning. This article provides a survey of the recent
advances in Big learning with Bayesian methods, termed Big Bayesian Learning,
including nonparametric Bayesian methods for adaptively inferring model
complexity, regularized Bayesian inference for improving flexibility via
posterior regularization, and scalable algorithms and systems based on
stochastic subsampling and distributed computing for dealing with large-scale
applications.
Comment: 21 pages, 6 figures
