Page 2 of results: 2,413 digital items found in 0.007 seconds

## Bayesian learning of visual chunks by human observers

Orbán, Gergő; Fiser, József; Aslin, Richard N.; Lengyel, Máté
Type: Journal Article
Portuguese
Search Relevance
47.65616%
Efficient and versatile processing of any hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. We developed an ideal learner based on Bayesian model comparison that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pair-wise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations and not by encoding the full correlational structure of the input.
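The ideal learner above rests on Bayesian model comparison: candidate encodings are scored by their marginal likelihood. As a generic toy (not the authors' chunk model), the sketch below compares two hypotheses about coin-flip data, a fixed fair coin versus a flexible uniform-prior model, by exact marginal likelihoods.

```python
from math import factorial

def marginal_fixed(k, n, theta=0.5):
    # Likelihood of a specific sequence with k heads under a fixed-bias coin.
    return theta**k * (1 - theta)**(n - k)

def marginal_uniform(k, n):
    # Integrate theta^k (1-theta)^(n-k) over a uniform Beta(1,1) prior:
    # Beta(k+1, n-k+1) = k! (n-k)! / (n+1)!
    return factorial(k) * factorial(n - k) / factorial(n + 1)

k, n = 9, 10  # nine heads in ten flips
bf = marginal_uniform(k, n) / marginal_fixed(k, n)
print(f"Bayes factor (flexible vs fair): {bf:.1f}")
```

With nine heads in ten flips the flexible model wins by a Bayes factor of about 9.3; with balanced data the simpler fair-coin model would win, which is the economy-versus-fit trade-off the abstract describes.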

## Sparse Bayesian Learning for Identifying Imaging Biomarkers in AD Prediction

Shen, Li; Qi, Yuan; Kim, Sungeun; Nho, Kwangsik; Wan, Jing; Risacher, Shannon L.; Saykin, Andrew J.;
Type: Journal Article
Search Relevance
47.360273%
We apply sparse Bayesian learning methods, automatic relevance determination (ARD) and predictive ARD (PARD), to Alzheimer's disease (AD) classification in order to make accurate predictions and, at the same time, identify imaging markers critical to AD. ARD is one of the most successful Bayesian feature selection methods. PARD is a powerful Bayesian feature selection method that provides sparse models which are easy to interpret; it selects the model with the best estimate of predictive performance rather than the one with the largest marginal model likelihood. A comparative study with support vector machines (SVM) shows that ARD/PARD generally outperform SVM in prediction accuracy. An additional comparison with surface-based general linear model (GLM) analysis shows that the regions with the strongest signals are identified by both GLM and ARD/PARD. While the GLM P-map returns significant regions all over the cortex, ARD/PARD provide a small number of relevant and meaningful imaging markers with predictive power, including both cortical and subcortical measures.
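A hedged sketch of the ARD idea (not the paper's pipeline or data): scikit-learn's `ARDRegression` fits a per-feature relevance prior on synthetic data in which only three of ten features carry signal. The synthetic setup and the 0.5 magnitude cutoff are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
true_coef = np.zeros(p)
true_coef[:3] = [5.0, -4.0, 3.0]          # only 3 of 10 features matter
y = X @ true_coef + 0.1 * rng.standard_normal(n)

model = ARDRegression().fit(X, y)
kept = np.flatnonzero(np.abs(model.coef_) > 0.5)
print("features kept:", kept)
```

ARD drives the weights of irrelevant features toward zero, yielding the small, interpretable marker set the abstract contrasts with the dense GLM P-map.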

## Reconstructing constructivism: Causal models, Bayesian learning mechanisms and the theory theory

Gopnik, Alison; Wellman, Henry M.
Type: Journal Article
Portuguese
Search Relevance
47.38005%
We propose a new version of the “theory theory” grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework theories. We outline the new theoretical ideas, explain the computational framework in an intuitive and non-technical way, and review an extensive but relatively recent body of empirical results that supports these ideas. These include new studies of the mechanisms of learning. Children infer causal structure from statistical information, through their own actions on the world and through observations of the actions of others. Studies demonstrate these learning mechanisms in children from 16 months to 4 years old and include research on causal statistical learning, informal experimentation through play, and imitation and informal pedagogy. They also include studies of the variability and progressive character of intuitive theory change, particularly theory of mind. These studies investigate both the physical and psychological and social domains. We conclude with suggestions for further collaborative projects between developmental and computational cognitive scientists.

## Bayesian learning and the psychology of rule induction

Endress, Ansgar D.
Type: Journal Article
Portuguese
Search Relevance
47.636377%
In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to spell out the underlying assumptions, and to confront them with the empirical results Frank and Tenenbaum (2011) propose to simulate, as well as with novel experiments. While rule-learning is arguably well suited to rational Bayesian approaches, I show that their models are neither psychologically plausible nor ideal observer models. Further, I show that their central assumption is unfounded: humans do not always preferentially learn more specific rules, but, at least in some situations, those rules that happen to be more salient. Even when granting the unsupported assumptions, I show that all of the experiments modeled by Frank and Tenenbaum (2011) either contradict their models, or have a large number of more plausible interpretations. I provide an alternative account of the experimental data based on simple psychological mechanisms, and show that this account both describes the data better...

## An evaluation of factors influencing Bayesian learning systems.

Eisenstein, E L; Alemi, F
Type: Journal Article
Search Relevance
47.55834%
OBJECTIVES: To examine the influences of situational and model factors on the accuracy of Bayesian learning systems. DESIGN: This study examines the impacts of variations in two situational factors, training sample size and number of attributes, and in two model factors, choice of Bayesian model and criteria for excluding model attributes, on the overall accuracy of Bayesian learning systems. MEASUREMENTS: The test data were derived from myocardial infarction patients who were admitted to eight hospitals in New Orleans during 1985. The test sample consisted of 339 cases; the training samples included 100, 400, and 800 cases. APACHE II variables were used for the model attributes and patient discharge status as the outcome predicted. Attribute sets were selected in sizes of 4, 8, and 12. The authors varied the Bayesian models (proper and simple) and the attribute exclusion criteria (optimism and pessimism). RESULTS: The simple Bayes model, which assumes conditional independence, consistently equalled or outperformed the proper (maximally dependent) Bayes model, which assumes conditional dependence, across all training sample and attribute set sizes. Not excluding model attributes was found to be preferable to using sample theory as an attribute exclusion criterion in both the simple and the proper models. CONCLUSION: In the domain tested...
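The "simple Bayes" model the study favors assumes conditional independence of the attributes given the outcome, i.e. what is now usually called naive Bayes. A minimal Gaussian naive Bayes sketch on synthetic data (the APACHE II variables and myocardial infarction outcomes are not reproduced here):

```python
import numpy as np

def fit_gnb(X, y):
    # "Simple Bayes": per-class mean/variance for each attribute,
    # assuming conditional independence given the class.
    classes = np.unique(y)
    stats = {c: (X[y == c].mean(0), X[y == c].var(0) + 1e-9, (y == c).mean())
             for c in classes}
    return classes, stats

def predict_gnb(model, X):
    classes, stats = model
    scores = []
    for c in classes:
        mu, var, prior = stats[c]
        # Summing per-attribute log-likelihoods is the independence assumption.
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(1)
        scores.append(ll + np.log(prior))
    return classes[np.argmax(scores, axis=0)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(2, 1, (200, 4))])
y = np.array([0] * 200 + [1] * 200)
model = fit_gnb(X, y)
acc = (predict_gnb(model, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The "proper" (maximally dependent) model in the study instead estimates the full joint distribution of attributes, which demands far more data, one reason the simple model held its own here.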

## Non-parametric Bayesian Learning with Deep Learning Structure and Its Applications in Wireless Networks

Pan, Erte; Han, Zhu
Type: Journal Article
Portuguese
Search Relevance
47.28901%
In this paper, we present an infinite hierarchical non-parametric Bayesian model to extract the hidden factors underlying observed data, where the number of hidden factors in each layer is unknown and potentially infinite; the number of layers can also be infinite. We construct a model structure that allows continuous values for the hidden factors and weights, which makes the model suitable for various applications. We use the Metropolis-Hastings method to infer the model structure, and the performance of the algorithm is then evaluated experimentally. Simulation results show that the model fits the underlying structure of simulated data.; Comment: 5 pages, 2 figures and 1 algorithm list
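Metropolis-Hastings itself is standard; as a generic sketch (a one-dimensional toy target, not the paper's structure inference), random-walk MH needs only the target's log-density up to a constant:

```python
import math
import random

def metropolis_hastings(log_target, x0, steps, step_size=1.0, seed=2):
    # Random-walk Metropolis: propose x' ~ N(x, step_size), accept with
    # probability min(1, target(x') / target(x)).
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + rng.gauss(0, step_size)
        log_alpha = log_target(proposal) - log_target(x)
        if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
            x = proposal
        samples.append(x)
    return samples

# Toy target: a standard normal, known only up to its normalizing constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, steps=20000)
mean = sum(samples) / len(samples)
print(f"sample mean: {mean:.3f}")
```

For structure inference, as in the paper, the proposal would instead add, remove, or modify model components, with the same accept/reject rule.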

## Prior Support Knowledge-Aided Sparse Bayesian Learning with Partly Erroneous Support Information

Fang, Jun; Shen, Yanning; Li, Fuwei; Li, Hongbin
Type: Journal Article
Search Relevance
47.3819%
It has been shown both experimentally and theoretically that sparse signal recovery can be significantly improved when part of the signal's support is known \emph{a priori}. In practice, however, such prior knowledge is usually inaccurate and contains errors, and using it may result in severe performance degradation or even recovery failure. In this paper, we study the problem of sparse signal recovery when partial but partly erroneous prior knowledge of the signal's support is available. Based on the conventional sparse Bayesian learning framework, we propose a modified two-layer Gaussian-inverse Gamma hierarchical prior model and, moreover, an improved three-layer hierarchical prior model. The modified two-layer model employs an individual parameter $b_i$ for each sparsity-controlling hyperparameter $\alpha_i$, and has the ability to place non-sparsity-encouraging priors on those coefficients believed to be in the support set. The three-layer hierarchical model is built on the modified two-layer prior model, with a prior placed on the parameters $\{b_i\}$ in the third layer. Such a model makes it possible to learn the true support automatically from partly erroneous information by learning the values of the parameters $\{b_i\}$. Variational Bayesian algorithms are developed based on the proposed hierarchical prior models. Numerical results are provided to illustrate the performance of the proposed algorithms.

## PAC-Bayesian Learning and Domain Adaptation

Germain, Pascal; Habrard, Amaury; Laviolette, François; Morvant, Emilie
Type: Journal Article
Search Relevance
47.68924%
In machine learning, Domain Adaptation (DA) arises when the distribution generating the test (target) data differs from the one generating the learning (source) data. It is well known that DA is a hard task even under strong assumptions, among which the covariate shift, where the source and target distributions diverge only in their marginals, i.e. they have the same labeling function. Another popular approach is to consider a hypothesis class that brings the two distributions closer while implying a low error for both tasks. This is a VC-dimension approach that restricts the complexity of a hypothesis class in order to obtain good generalization. Instead, we propose a PAC-Bayesian approach that seeks suitable weights to be given to each hypothesis in order to build a majority vote. We prove a new DA bound in the PAC-Bayesian context. This leads us to design the first DA-PAC-Bayesian algorithm, based on the minimization of the proposed bound. In doing so, we seek a \rho-weighted majority vote that takes into account a trade-off between three quantities. The first two are, as usual in the PAC-Bayesian approach, (a) the complexity of the majority vote (measured by a Kullback-Leibler divergence) and (b) its empirical risk (measured by the \rho-average errors on the source sample). The third quantity is (c) the capacity of the majority vote to distinguish some structural difference between the source and target samples.; Comment: https://sites.google.com/site/multitradeoffs2012/

## A scaled gradient projection method for Bayesian learning in dynamical systems

Bonettini, Silvia; Chiuso, Alessandro; Prato, Marco
Type: Journal Article
Portuguese
Search Relevance
47.28901%
A crucial task in system identification problems is the selection of the most appropriate model class, classically addressed by resorting to cross-validation or asymptotic arguments. As recently suggested in the literature, this can be addressed in a Bayesian framework, where model complexity is regulated by a few hyperparameters, which can be estimated via marginal likelihood maximization. It is thus of primary importance to design effective optimization methods to solve the corresponding optimization problem. If the unknown impulse response is modeled as a Gaussian process with a suitable kernel, maximization of the marginal likelihood leads to a challenging nonconvex optimization problem, which requires a stable and effective solution strategy. In this paper we address this problem by means of a scaled gradient projection algorithm, in which the scaling matrix and the steplength parameter play a crucial role in providing a meaningful solution in a computational time comparable with second-order methods. In particular, we propose both a generalization of the split-gradient approach to design the scaling matrix in the presence of box constraints, and an effective implementation of the gradient and objective function. Extensive numerical experiments on several test problems show that our method is very effective, providing in a few tenths of a second solutions with accuracy comparable with state-of-the-art approaches. Moreover...

## Efficient Bayesian Learning in Social Networks with Gaussian Estimators

Mossel, Elchanan; Tamuz, Omer
Type: Journal Article
Portuguese
Search Relevance
47.550977%
We propose a Bayesian model of iterative learning on social networks that is computationally tractable; the agents of this model are fully rational, and their calculations can be performed with modest computational resources for large networks. Furthermore, learning is efficient, in the sense that the process results in an information-theoretically optimal belief. This result extends Condorcet's Jury Theorem to general social networks, preserving rationality, computational feasibility and efficient learning. The model consists of a group of agents who belong to a social network, so that a pair of agents can observe each other's actions only if they are neighbors. We assume that the network is connected and that the agents have full knowledge of the structure of the network, so that they know the members of the network and their social connections. The agents try to estimate some state of the world S (say, the price of oil a year from today). Each agent has a private measurement: an independently acquired piece of information regarding S. This is modeled, for agent v, by a number S_v picked from a Gaussian distribution with mean S and standard deviation one. Accordingly, agent v's prior belief regarding S is a normal distribution with mean S_v and standard deviation one. The agents start acting iteratively. At each iteration...
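A much-simplified sketch of the iterative process: in the Gaussian setting, each agent's next estimate is a weighted average of its own estimate and its neighbors' last actions. The uniform-weight (DeGroot-style) averaging below is an illustrative assumption, not the paper's exact Bayesian update; on a connected network it drives the agents to consensus.

```python
import random

def simulate(adjacency, S=3.0, seed=4, iterations=50):
    rng = random.Random(seed)
    n = len(adjacency)
    # Private measurement S_v ~ N(S, 1); the initial estimate is S_v itself.
    est = [rng.gauss(S, 1.0) for _ in range(n)]
    initial_spread = max(est) - min(est)
    for _ in range(iterations):
        # Each agent averages its own estimate with its neighbors' actions.
        est = [(est[v] + sum(est[u] for u in adjacency[v]))
               / (1 + len(adjacency[v])) for v in range(n)]
    return initial_spread, est

# A 6-agent ring network: each agent observes two neighbors' actions.
ring = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}
spread0, final = simulate(ring)
print(f"spread: {spread0:.2f} -> {max(final) - min(final):.2e}")
```

Pooling more private measurements shrinks the estimates' variance around S, which is the information-aggregation effect behind the Jury Theorem extension.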

## Bayesian learning of noisy Markov decision processes

Singh, Sumeetpal S.; Chopin, Nicolas; Whiteley, Nick
Type: Journal Article
Search Relevance
47.38005%
We consider the inverse reinforcement learning problem, that is, the problem of learning from, and then predicting or mimicking a controller based on state/action data. We propose a statistical model for such data, derived from the structure of a Markov decision process. Adopting a Bayesian approach to inference, we show how latent variables of the model can be estimated, and how predictions about actions can be made, in a unified framework. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution. This step includes a parameter expansion step, which is shown to be essential for good convergence properties of the MCMC sampler. As an illustration, the method is applied to learning a human controller.

## A Robust Independence Test for Constraint-Based Learning of Causal Structure

Dash, Denver; Druzdzel, Marek J.
Type: Journal Article
Search Relevance
47.408916%
Constraint-based (CB) learning is a formalism for learning a causal network with a database D by performing a series of conditional-independence tests to infer structural information. This paper considers a new test of independence that combines ideas from Bayesian learning, Bayesian network inference, and classical hypothesis testing to produce a more reliable and robust test. The new test can be calculated in the same asymptotic time and space required for the standard tests such as the chi-squared test, but it allows the specification of a prior distribution over parameters and can be used when the database is incomplete. We prove that the test is correct, and we demonstrate empirically that, when used with a CB causal discovery algorithm with noninformative priors, it recovers structural features more reliably and it produces networks with smaller KL-Divergence, especially as the number of nodes increases or the number of records decreases. Another benefit is the dramatic reduction in the probability that a CB algorithm will stall during the search, providing a remedy for an annoying problem plaguing CB learning when the database is small.; Comment: Appears in Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence (UAI2003)
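For reference, the standard baseline mentioned above, a chi-squared test of conditional or marginal independence, is available in SciPy; the contingency table below is made-up data with an obvious dependence between two binary variables.

```python
from scipy.stats import chi2_contingency

# Contingency table of two binary variables over database records:
# rows = X in {0, 1}, columns = Y in {0, 1}.
table = [[50, 10],
         [10, 50]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p_value:.2g}, dof={dof}")
```

A CB algorithm such as PC issues many tests of this shape, conditioning on subsets of variables; the paper's Bayesian test replaces the chi-squared statistic while keeping the same asymptotic cost.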

## Bayesian Learning of Neural Networks for Signal/Background Discrimination in Particle Physics

Pogwizd, Michael; Elgass, Laura Jane; Bhat, Pushpalatha C.
Type: Journal Article
Search Relevance
47.487085%
Neural networks are used extensively in classification problems in particle physics research. Since the training of neural networks can be viewed as a problem of inference, Bayesian learning of neural networks can provide more optimal and robust results than conventional learning methods. We have investigated the use of Bayesian neural networks for signal/background discrimination in the search for second generation leptoquarks at the Tevatron, as an example. We present a comparison of the results obtained from the conventional training of feedforward neural networks and networks trained with Bayesian methods.; Comment: 3 pages, 4 figures, conference proceedings

## Computationally Efficient Sparse Bayesian Learning via Generalized Approximate Message Passing

Li, Fuwei; Fang, Jun; Duan, Huiping; Chen, Zhi; Li, Hongbin
Type: Journal Article
Search Relevance
47.56512%
The sparse Bayesian learning (also referred to as Bayesian compressed sensing) algorithm is one of the most popular approaches for sparse signal recovery, and has demonstrated superior performance in a series of experiments. Nevertheless, the sparse Bayesian learning algorithm has computational complexity that grows exponentially with the dimension of the signal, which hinders its application to many practical problems even with moderately large data sets. To address this issue, in this paper we propose a computationally efficient sparse Bayesian learning method via the generalized approximate message passing (GAMP) technique. Specifically, the algorithm is developed within an expectation-maximization (EM) framework, using GAMP to efficiently compute an approximation of the posterior distribution of the hidden variables. The hyperparameters associated with the hierarchical Gaussian prior are learned by iteratively maximizing the Q-function, which is calculated from the posterior approximation obtained from GAMP. Numerical results are provided to illustrate the computational efficacy and effectiveness of the proposed algorithm.

## On The Sparse Bayesian Learning Of Linear Models

Type: Journal Article
Search Relevance
47.3819%
This work is a re-examination of the sparse Bayesian learning (SBL) of linear regression models of Tipping (2001) in a high-dimensional setting. We propose a hard-thresholded version of the SBL estimator that achieves, for orthogonal design matrices, the non-asymptotic estimation error rate of $\sigma\sqrt{s\log p}/\sqrt{n}$, where $n$ is the sample size, $p$ the number of regressors, $\sigma$ the regression model standard deviation, and $s$ the number of non-zero regression coefficients. We also establish that, with high probability, the estimator identifies the non-zero regression coefficients. In our simulations we found that sparse Bayesian learning regression performs better than the lasso (Tibshirani (1996)) when the signal to be recovered is strong.; Comment: 23 pages, 9 figures
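The mechanics of hard thresholding can be sketched directly. The constant in the threshold and the orthogonal-design simulation below are illustrative assumptions, not the paper's exact estimator (which thresholds the SBL fit rather than raw per-coefficient estimates).

```python
import numpy as np

def hard_threshold(coef, sigma, n, c=2.0):
    # Zero out estimates below a universal-style level c*sigma*sqrt(log p / n);
    # the constant c is a tuning choice, not the paper's prescription.
    p = coef.size
    t = c * sigma * np.sqrt(np.log(p) / n)
    return np.where(np.abs(coef) > t, coef, 0.0)

rng = np.random.default_rng(3)
n, p, s, sigma = 200, 50, 5, 1.0
beta = np.zeros(p)
beta[:s] = 2.0  # a strong signal on the first s coefficients
# Orthogonal design: each per-coefficient estimate is beta_i + N(0, sigma^2/n).
est = beta + sigma / np.sqrt(n) * rng.standard_normal(p)
kept = np.flatnonzero(hard_threshold(est, sigma, n))
print("support recovered:", kept)
```

When the signal is strong relative to the $\sigma/\sqrt{n}$ noise level, thresholding recovers the support exactly, matching the abstract's high-probability support-identification claim.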

## Bayesian Learning of Loglinear Models for Neural Connectivity

Type: Journal Article
Search Relevance
47.28901%
This paper presents a Bayesian approach to learning the connectivity structure of a group of neurons from data on configuration frequencies. A major objective of the research is to provide statistical tools for detecting changes in firing patterns with changing stimuli. Our framework is not restricted to the well-understood case of pair interactions, but generalizes the Boltzmann machine model to allow for higher order interactions. The paper applies a Markov Chain Monte Carlo Model Composition (MC3) algorithm to search over connectivity structures and uses Laplace's method to approximate posterior probabilities of structures. Performance of the methods was tested on synthetic data. The models were also applied to data obtained by Vaadia on multi-unit recordings of several neurons in the visual cortex of a rhesus monkey in two different attentional states. Results confirmed the experimenters' conjecture that different attentional states were associated with different interaction structures.; Comment: Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)

## Hidden states, hidden structures: Bayesian learning in time series models

Murphy, James Kevin
Source: University of Cambridge, Department of Engineering; Publisher: University of Cambridge, Department of Engineering
Type: Thesis; doctoral; PhD
Portuguese
Search Relevance
47.3819%
This thesis presents methods for the inference of system state and the learning of model structure for a number of hidden-state time series models, within a Bayesian probabilistic framework. Motivating examples are taken from application areas including finance, physical object tracking and audio restoration. The work in this thesis can be broadly divided into three themes: system and parameter estimation in linear jump-diffusion systems, non-parametric model (system) estimation and batch audio restoration. For linear jump-diffusion systems, efficient state estimation methods based on the variable rate particle filter are presented for the general linear case (chapter 3) and a new method of parameter estimation based on Particle MCMC methods is introduced and tested against an alternative method using reversible-jump MCMC (chapter 4). Non-parametric model estimation is examined in two settings: the estimation of non-parametric environment models in a SLAM-style problem, and the estimation of the network structure and forms of linkage between multiple objects. In the former case, a non-parametric Gaussian process prior model is used to learn a potential field model of the environment in which a target moves. Efficient solution methods based on Rao-Blackwellized particle filters are given (chapter 5). In the latter case...

## Bayesian Learning Using Automatic Relevance Determination Prior with an Application to Earthquake Early Warning

Oh, Chang Kook; Beck, James L.; Yamada, Masumi
Source: American Society of Civil Engineers; Publisher: American Society of Civil Engineers
Type: Article; Peer-Reviewed; Format: application/pdf
Search Relevance
47.3819%
A novel method of Bayesian learning with automatic relevance determination prior is presented that provides a powerful approach to problems of classification based on data features, for example, classifying soil liquefaction potential based on soil and seismic shaking parameters, automatically classifying the damage states of a structure after severe loading based on features of its dynamic response, and real-time classification of earthquakes based on seismic signals. After introduction of the theory, the method is illustrated by applying it to an earthquake record dataset from nine earthquakes to build an efficient real-time algorithm for near-source versus far-source classification of incoming seismic ground motion signals. This classification is needed in the development of early warning systems for large earthquakes. It is shown that the proposed methodology is promising since it provides a classifier with higher correct classification rates and better generalization performance than a previous Bayesian learning method with a fixed prior distribution that was applied to the same classification problem.

## Non-parametric Bayesian Learning with Incomplete Data

Wang, Chunping
Type: Dissertation
Search Relevance
47.68924%

In most machine learning approaches, it is usually assumed that data are complete. When data are partially missing due to various reasons, for example, the failure of a subset of sensors, image corruption or inadequate medical measurements, many learning methods designed for complete data cannot be directly applied. In this dissertation we treat two kinds of problems with incomplete data using non-parametric Bayesian approaches: classification with incomplete features and analysis of low-rank matrices with missing entries.

Incomplete data in classification problems are handled by assuming input features to be generated from a mixture-of-experts model, with each individual expert (classifier) defined by a local Gaussian in feature space. With a linear classifier associated with each Gaussian component, nonlinear classification boundaries are achievable without the introduction of kernels. Within the proposed model, the number of components is theoretically "infinite", as defined by a Dirichlet process construction, with the actual number of mixture components (experts) needed inferred based upon the data under test. With a higher-level DP we further extend the classifier for analysis of multiple related tasks (multi-task learning)...
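The Dirichlet process construction mentioned above is often illustrated by its Chinese restaurant process metaphor: the number of clusters is unbounded a priori yet finite for any dataset. A generic toy sampler (not the dissertation's mixture-of-experts model):

```python
import random

def chinese_restaurant_process(n_customers, alpha, seed=5):
    # Sequential seating: customer i joins table t with prob |t| / (i + alpha),
    # or opens a new table with prob alpha / (i + alpha).
    rng = random.Random(seed)
    tables = []
    for i in range(n_customers):
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for t in tables:
            acc += len(t)
            if r < acc:
                t.append(i)
                break
        else:
            tables.append([i])
    return tables

tables = chinese_restaurant_process(100, alpha=2.0)
print(f"{len(tables)} clusters for 100 customers")
```

The expected number of tables grows only logarithmically with the number of customers, which is how a DP mixture lets the data, rather than a fixed hyperparameter, determine the number of experts.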

Jeon, HyungJu