
Feature Dynamic Bayesian Networks

Hutter, Marcus
Source: Atlantis Press Publisher: Atlantis Press
Type: Conference paper
Portuguese
Search relevance: 37.512798%
Feature Markov Decision Processes (ΦMDPs) [Hut09] are well-suited for learning agents in general environments. Nevertheless, unstructured (Φ)MDPs are limited to relatively simple environments. Structured MDPs like Dynamic Bayesian Networks (DBNs) are used for large-scale real-world problems. In this article I extend ΦMDP to ΦDBN. The primary contribution is to derive a cost criterion that allows the most relevant features to be extracted automatically from the environment, leading to the "best" DBN representation. I discuss all building blocks required for a complete general learning algorithm.
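
The paper's cost criterion is specific to ΦDBNs, but the general idea of scoring candidate feature maps by a two-part code length can be sketched in a few lines. The following Python snippet is a simplified, hedged illustration (not the paper's criterion): it scores a candidate feature map phi, which compresses the observation history into a small state, by the empirical code length of the next observation given that state plus an MDL-style penalty for the number of free parameters.

```python
import math
from collections import Counter

def mdl_cost(history, phi):
    """Toy two-part-code cost of a candidate feature map `phi`.

    `history` is a list of discrete observations; `phi(history[:t+1])` maps
    the history up to time t to a discrete state.  This is only a simplified
    stand-in for the cost criterion discussed in the paper: it codes each
    next observation given the current abstract state, plus an MDL-style
    penalty for the number of free parameters.
    """
    states = [phi(history[: t + 1]) for t in range(len(history) - 1)]
    pairs = Counter(zip(states, history[1:]))      # (state, next observation) counts
    per_state = Counter(states)                    # how often each state occurs

    # Data part: empirical code length of observations given states.
    data_bits = -sum(n * math.log2(n / per_state[s]) for (s, _), n in pairs.items())

    # Model part: 0.5 * log2(T) bits per free conditional parameter.
    n_states, n_obs = len(set(states)), len(set(history))
    n_params = n_states * (n_obs - 1)
    model_bits = 0.5 * n_params * math.log2(len(history) - 1)
    return data_bits + model_bits

# Toy usage: a map that remembers the last observation should beat a map
# that forgets everything on a (noisy) alternating sequence.
seq = [0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1]
print(mdl_cost(seq, phi=lambda h: h[-1]))   # richer map: lower cost
print(mdl_cost(seq, phi=lambda h: 0))       # trivial map: higher cost
```

On the toy sequence, the map that remembers the last observation obtains a lower cost than the map that forgets everything, which is the kind of preference a feature-extraction criterion should express.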

Bayesian Methods for Learning Analytics

Waters, Andrew
Source: Rice University Publisher: Rice University
Portuguese
Search relevance: 37.550977%
Learning Analytics (LA) is a broad umbrella term used to describe statistical models and algorithms for understanding the relationship between a set of learners and a set of questions. The end goal of LA is to understand the dynamics of the responses provided by each learner. LA models serve to answer important questions concerning learners and questions, such as which educational concepts a learner understands well, which ones they do not, and how these concepts relate to the individual questions. LA models additionally predict future learning outcomes based on learner performance to date. This information can then be used to adapt learning to achieve specific educational goals. In this thesis, we adopt a fully Bayesian approach to LA, which gives us both greater flexibility in modeling and superior performance over methods based on convex optimization. We first develop novel models and algorithms for LA. We showcase the performance of these methods on both synthetic and real-world educational datasets. Second, we apply our LA framework to the problem of collaboration-type detection in educational data sets. Collaboration amongst learners in educational settings is problematic for two reasons. First...

Colaboração em ambientes inteligentes de aprendizagem mediada por um agente social probabilístico; Collaboration in intelligent learning environments supported by a probabilistic social agent

Boff, Elisa
Source: Universidade Federal do Rio Grande do Sul Publisher: Universidade Federal do Rio Grande do Sul
Type: Doctoral thesis Format: application/pdf
Portuguese
Search relevance: 37.512798%
This work proposes a probabilistic model of knowledge and reasoning for an agent, called the Social Agent, whose main goal is to analyze the profiles of students who use an Intelligent Tutoring System called AMPLIA and to compose work groups. To form these groups, the Social Agent considers individual aspects of each student as well as group-formation strategies. Collaborative learning involves social relationships whose processes are complex and difficult to model computationally. To represent some elements of this process and of its participants, individual aspects such as affective state, psychological factors, and cognition must be considered. Social aspects must also be considered, such as social skill, acceptance, and the way people relate to one another and form their work or study groups. Intelligent Tutoring Systems, Multi-Agent Systems, and Affective Computing are research areas that have been investigated as ways to represent and computationally handle some of these multidisciplinary aspects that accompany individual and collaborative learning. The Social Agent is embedded in the agent society of the PortEdu portal, which...

Learning Discriminative Bayesian Networks from High-dimensional Continuous Neuroimaging Data

Zhou, Luping; Wang, Lei; Liu, Lingqiao; Ogunbona, Philip; Shen, Dinggang
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published on 23/06/2015 Portuguese
Search relevance: 37.550977%
Due to their causal semantics, Bayesian networks (BNs) have been widely employed to discover underlying data relationships in exploratory studies, such as brain research. Despite their success in modeling the probability distribution of variables, a BN is naturally a generative model, which is not necessarily discriminative. This may cause subtle but critical network changes across populations, precisely the ones of investigative value, to be overlooked. In this paper, we propose to improve the discriminative power of BN models for continuous variables from two different perspectives. This yields two general discriminative learning frameworks for Gaussian Bayesian networks (GBNs). In the first framework, we employ the Fisher kernel to bridge the generative models of GBNs and the discriminative classifiers of SVMs, and convert GBN parameter learning to Fisher kernel learning by minimizing a generalization error bound of SVMs. In the second framework, we employ the max-margin criterion and build it directly upon GBN models to explicitly optimize the classification performance of the GBNs. The advantages and disadvantages of the two frameworks are discussed and experimentally compared. Both of them demonstrate strong power in learning discriminative parameters of GBNs for neuroimaging-based brain network analysis...

Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search

Guez, Arthur; Silver, David; Dayan, Peter
Source: Cornell University Publisher: Cornell University
Type: Journal article
Portuguese
Search relevance: 37.512798%
Bayesian model-based reinforcement learning is a formally elegant approach to learning optimal behaviour under model uncertainty, trading off exploration and exploitation in an ideal way. Unfortunately, finding the resulting Bayes-optimal policies is notoriously taxing, since the search space becomes enormous. In this paper we introduce a tractable, sample-based method for approximate Bayes-optimal planning which exploits Monte-Carlo tree search. Our approach outperformed prior Bayesian model-based RL algorithms by a significant margin on several well-known benchmark problems -- because it avoids expensive applications of Bayes rule within the search tree by lazily sampling models from the current beliefs. We illustrate the advantages of our approach by showing it working in an infinite state space domain which is qualitatively out of reach of almost all previous work in Bayesian exploration.; Comment: 14 pages, 7 figures, includes supplementary material. Advances in Neural Information Processing Systems (NIPS) 2012
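
The computational trick mentioned above, avoiding repeated applications of Bayes rule inside the tree by sampling a model from the current posterior once per simulation, can be sketched compactly. The following is a heavily simplified, hedged illustration of root sampling with a Dirichlet posterior over the transitions of a tiny discrete MDP; it is not the authors' BAMCP implementation, and for brevity it indexes tree nodes by (state, depth) rather than by full histories.

```python
import numpy as np

rng = np.random.default_rng(0)

S, A, GAMMA, HORIZON = 3, 2, 0.95, 15
counts = np.ones((S, A, S))            # Dirichlet posterior over transitions
rewards = rng.uniform(size=(S, A))     # rewards assumed known, for simplicity

Q = {}  # (state, depth, action) -> value estimate
N = {}  # visit counts for UCB

def ucb_action(s, d):
    total = sum(N.get((s, d, a), 0) for a in range(A)) + 1
    scores = [Q.get((s, d, a), 0.0)
              + 2.0 * np.sqrt(np.log(total) / (N.get((s, d, a), 0) + 1e-3))
              for a in range(A)]
    return int(np.argmax(scores))

def simulate(s, d, model):
    """One simulation through the tree, using a model fixed at the root."""
    if d >= HORIZON:
        return 0.0
    a = ucb_action(s, d)
    s2 = rng.choice(S, p=model[s, a])
    ret = rewards[s, a] + GAMMA * simulate(s2, d + 1, model)
    key = (s, d, a)
    N[key] = N.get(key, 0) + 1
    Q[key] = Q.get(key, 0.0) + (ret - Q.get(key, 0.0)) / N[key]
    return ret

def plan(root_state, n_simulations=2000):
    for _ in range(n_simulations):
        # Lazy/root sampling: draw one complete transition model from the
        # posterior and reuse it for the whole simulation -- no Bayes-rule
        # updates are needed inside the tree.
        model = np.array([[rng.dirichlet(counts[s, a]) for a in range(A)]
                          for s in range(S)])
        simulate(root_state, 0, model)
    return max(range(A), key=lambda a: Q.get((root_state, 0, a), 0.0))

print("greedy root action:", plan(root_state=0))
```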

Online Bayesian Passive-Aggressive Learning

Shi, Tianlin; Zhu, Jun
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published on 11/12/2013 Portuguese
Search relevance: 37.550977%
Online Passive-Aggressive (PA) learning is an effective framework for performing max-margin online learning. But the deterministic formulation and estimated single large-margin model could limit its capability in discovering descriptive structures underlying complex data. This paper presents online Bayesian Passive-Aggressive (BayesPA) learning, which subsumes the online PA and extends naturally to incorporate latent variables and perform nonparametric Bayesian inference, thus providing great flexibility for explorative analysis. We apply BayesPA to topic modeling and derive efficient online learning algorithms for max-margin topic models. We further develop nonparametric methods to resolve the number of topics. Experimental results on real datasets show that our approaches significantly improve time efficiency while maintaining comparable results with the batch counterparts.; Comment: 10 Pages. ICML 2014, Beijing, China
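
For context, the deterministic Passive-Aggressive update that BayesPA subsumes has a simple closed form: stay put when the hinge loss is zero, otherwise take a step of size tau = min(C, loss / ||x||^2). The sketch below implements that classic PA-I update for binary classification; it is background for the abstract, not BayesPA itself.

```python
import numpy as np

def pa_update(w, x, y, C=1.0):
    """One Passive-Aggressive (PA-I) step for binary labels y in {-1, +1}.

    The weight vector stays unchanged when the hinge loss is zero
    ("passive") and otherwise moves just far enough to satisfy the
    margin constraint ("aggressive"), capped by C.
    """
    loss = max(0.0, 1.0 - y * np.dot(w, x))
    if loss > 0.0:
        tau = min(C, loss / np.dot(x, x))
        w = w + tau * y * x
    return w

# Toy usage: learn a separator for y = sign(x[0] - x[1]) online.
rng = np.random.default_rng(0)
w = np.zeros(2)
for _ in range(500):
    x = rng.normal(size=2)
    y = 1.0 if x[0] - x[1] > 0 else -1.0
    w = pa_update(w, x, y)
print("learned weights:", w)   # roughly proportional to (1, -1)
```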

Kernelized Bayesian Matrix Factorization

Gönen, Mehmet; Khan, Suleiman A.; Kaski, Samuel
Source: Cornell University Publisher: Cornell University
Type: Journal article
Portuguese
Search relevance: 37.512798%
We extend kernelized matrix factorization with a fully Bayesian treatment and with an ability to work with multiple side information sources expressed as different kernels. Kernel functions have been introduced to matrix factorization to integrate side information about the rows and columns (e.g., objects and users in recommender systems), which is necessary for making out-of-matrix (i.e., cold start) predictions. We discuss specifically bipartite graph inference, where the output matrix is binary, but extensions to more general matrices are straightforward. We extend the state of the art in two key aspects: (i) A fully conjugate probabilistic formulation of the kernelized matrix factorization problem enables an efficient variational approximation, whereas fully Bayesian treatments are not computationally feasible in the earlier approaches. (ii) Multiple side information sources are included, treated as different kernels in multiple kernel learning, which additionally reveals which side information sources are informative. Our method outperforms alternatives in predicting drug-protein interactions on two data sets. We then show that our framework can also be used for solving multilabel learning problems by considering samples and labels as the two domains on which matrix factorization operates. Our algorithm obtains the lowest Hamming loss values on 10 out of 14 multilabel classification data sets compared to five state-of-the-art multilabel learning algorithms.; Comment: Proceedings of the 30th International Conference on Machine Learning (2013)

A Branch-and-Bound Algorithm for MDL Learning Bayesian Networks

Tian, Jin
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published on 16/01/2013 Portuguese
Search relevance: 37.512798%
This paper extends the work in [Suzuki, 1996] and presents an efficient depth-first branch-and-bound algorithm for learning Bayesian network structures, based on the minimum description length (MDL) principle, for a given (consistent) variable ordering. The algorithm exhaustively searches through all network structures and guarantees to find the network with the best MDL score. Preliminary experiments show that the algorithm is efficient, and that the time complexity grows slowly with the sample size. The algorithm is useful for empirically studying both the performance of suboptimal heuristic search algorithms and the adequacy of the MDL principle in learning Bayesian networks.; Comment: Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000)
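
As a point of reference for the score being optimized, the local MDL score of a node given a candidate parent set can be written as the negative log-likelihood of the node's conditional distribution plus (log2 N)/2 bits per free parameter. The sketch below is a generic, hedged version of that local score for integer-coded discrete data; it is not the paper's exact formulation and does not include the branch-and-bound search.

```python
import math
from collections import Counter
import numpy as np

def local_mdl(data, child, parents):
    """MDL score (in bits, lower is better) of `child` given `parents`.

    `data` is an (N, d) integer array of discrete observations.
    """
    N = data.shape[0]
    child_vals = data[:, child]
    parent_rows = [tuple(row) for row in data[:, parents]] if parents else [()] * N

    joint = Counter(zip(parent_rows, child_vals))
    parent_counts = Counter(parent_rows)

    # Negative log-likelihood of the child's conditional distribution.
    nll_bits = -sum(n * math.log2(n / parent_counts[p]) for (p, _), n in joint.items())

    # Penalty: 0.5 * log2(N) bits per free parameter.
    r_child = len(set(child_vals))
    q_parents = int(np.prod([len(set(data[:, p])) for p in parents])) if parents else 1
    penalty = 0.5 * math.log2(N) * q_parents * (r_child - 1)
    return nll_bits + penalty

# Toy usage: X0 -> X1, X2 independent noise; the true parent set should score best.
rng = np.random.default_rng(1)
x0 = rng.integers(0, 2, size=2000)
x1 = (x0 ^ (rng.random(2000) < 0.1)).astype(int)   # noisy copy of x0
x2 = rng.integers(0, 2, size=2000)
data = np.column_stack([x0, x1, x2])

for parents in ([], [0], [2], [0, 2]):
    print(parents, round(local_mdl(data, child=1, parents=parents), 1))
```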

Robust learning Bayesian networks for prior belief

Ueno, Maomi
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published on 14/02/2012 Portuguese
Search relevance: 37.512798%
Recent reports have described that learning Bayesian networks is highly sensitive to the chosen equivalent sample size (ESS) in the Bayesian Dirichlet equivalence uniform (BDeu) prior. This sensitivity often leads to unstable or undesirable results. This paper describes some asymptotic analyses of BDeu to explain the reasons for the sensitivity and its effects. Furthermore, it proposes a learning score that is robust with respect to ESS, obtained by eliminating the sensitive factors from the approximation of log-BDeu.
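
The sensitivity in question can be observed directly from the standard closed form of the BDeu local score. The sketch below computes that score for one node and several values of the equivalent sample size, using scipy's gammaln; it is a generic illustration, not the paper's asymptotic analysis or its proposed robust score.

```python
import numpy as np
from collections import Counter
from scipy.special import gammaln

def bdeu_local(data, child, parents, ess):
    """Log BDeu score of `child` given `parents` for equivalent sample size `ess`.

    `data` is an (N, d) integer array of discrete observations.
    """
    r = len(set(data[:, child]))                       # child cardinality
    q = int(np.prod([len(set(data[:, p])) for p in parents])) if parents else 1
    a_j, a_jk = ess / q, ess / (q * r)                 # BDeu hyperparameters

    rows = [tuple(row) for row in data[:, parents]] if parents else [()] * len(data)
    n_jk = Counter(zip(rows, data[:, child]))
    n_j = Counter(rows)

    score = 0.0
    for j, nj in n_j.items():
        score += gammaln(a_j) - gammaln(a_j + nj)
        for k in set(data[:, child]):
            njk = n_jk.get((j, k), 0)
            score += gammaln(a_jk + njk) - gammaln(a_jk)
    return score

# Toy usage: how the score of the (true) parent set reacts to the chosen ESS.
rng = np.random.default_rng(2)
x0 = rng.integers(0, 3, size=500)
x1 = ((x0 + rng.integers(0, 2, size=500)) % 3).astype(int)   # child depends on x0
data = np.column_stack([x0, x1])

for ess in (0.1, 1.0, 10.0, 100.0):
    with_parent = bdeu_local(data, child=1, parents=[0], ess=ess)
    without = bdeu_local(data, child=1, parents=[], ess=ess)
    print(f"ESS={ess:6.1f}  score difference (parent vs none): {with_parent - without:8.1f}")
```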

Stochastic Expectation Propagation

Li, Yingzhen; Hernandez-Lobato, Jose Miguel; Turner, Richard E.
Source: Cornell University Publisher: Cornell University
Type: Journal article
Portuguese
Search relevance: 37.550977%
Expectation propagation (EP) is a deterministic approximation algorithm that is often used to perform approximate Bayesian parameter learning. EP approximates the full intractable posterior distribution through a set of local approximations that are iteratively refined for each datapoint. EP can offer analytic and computational advantages over other approximations, such as Variational Inference (VI), and is the method of choice for a number of models. The local nature of EP appears to make it an ideal candidate for performing Bayesian learning on large models in large-scale dataset settings. However, EP has a crucial limitation in this context: the number of approximating factors needs to increase with the number of data-points, N, which often entails a prohibitively large memory overhead. This paper presents an extension to EP, called stochastic expectation propagation (SEP), that maintains a global posterior approximation (like VI) but updates it in a local way (like EP). Experiments on a number of canonical learning problems using synthetic and real-world datasets indicate that SEP performs almost as well as full EP, but reduces the memory consumption by a factor of $N$. SEP is therefore ideally suited to performing approximate Bayesian learning in the large model...
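
In symbols, the scheme described above keeps a single factor f(θ) tied across all N data points, so the approximate posterior is q(θ) ∝ p0(θ) f(θ)^N, and each refinement is an averaged, damped EP step. The following is a hedged paraphrase of that update, not a verbatim reproduction of the paper's algorithm:

```latex
% One SEP refinement for a randomly chosen datapoint x_n:
\begin{aligned}
q_{\setminus 1}(\theta) &\propto q(\theta)\, f(\theta)^{-1}
  && \text{(cavity: remove one copy of the tied factor)} \\
\tilde{p}_n(\theta) &\propto q_{\setminus 1}(\theta)\, p(x_n \mid \theta)
  && \text{(tilted distribution)} \\
q^{\text{new}}(\theta) &= \operatorname*{proj}\bigl[\tilde{p}_n(\theta)\bigr]
  && \text{(moment matching, as in EP)} \\
f^{\text{new}}(\theta) &= f(\theta)^{\,1-1/N}
  \left(\frac{q^{\text{new}}(\theta)}{q_{\setminus 1}(\theta)}\right)^{1/N}
  && \text{(damped update of the single shared factor)}
\end{aligned}
```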

On the Number of Samples Needed to Learn the Correct Structure of a Bayesian Network

Zuk, Or; Margel, Shiri; Domany, Eytan
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published on 27/06/2012 Portuguese
Search relevance: 37.550977%
Bayesian Networks (BNs) are useful tools giving a natural and compact representation of joint probability distributions. In many applications one needs to learn a Bayesian Network (BN) from data. In this context, it is important to understand the number of samples needed in order to guarantee successful learning. Previous work has studied the sample complexity of BNs, yet it mainly focused on the requirement that the learned distribution be close to the original distribution which generated the data. In this work, we study a different aspect of the learning, namely the number of samples needed in order to learn the correct structure of the network. We give both asymptotic results, valid in the large sample limit, and experimental results, demonstrating the learning behavior for feasible sample sizes. We show that structure learning is a more difficult task, compared to approximating the correct distribution, in the sense that it requires a much larger number of samples, regardless of the computational power available to the learner.; Comment: Appears in Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence (UAI2006)

The threshold EM algorithm for parameter learning in bayesian network with incomplete data

Lamine, Fradj Ben; Kalti, Karim; Mahjoub, Mohamed Ali
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published on 07/04/2012 Portuguese
Search relevance: 37.550977%
Bayesian networks (BNs) are used in a wide range of applications, but parameter learning remains an issue. In real applications, training data are often incomplete or some nodes are hidden. To deal with this problem, several parameter-learning algorithms have been suggested, such as the EM, Gibbs sampling, and RBE algorithms. In order to limit the search space and escape the local maxima produced by the EM algorithm, this paper presents a parameter-learning algorithm that is a fusion of the EM and RBE algorithms. The algorithm incorporates the range of each parameter into the EM algorithm; this range is calculated by the first step of the RBE algorithm, allowing a regularization of each parameter of the Bayesian network after the maximization step of EM. The threshold EM algorithm is applied to brain tumor diagnosis and shows some advantages and disadvantages over the EM algorithm.; Comment: 6 pages
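
The fusion described above, EM steps whose maximization results are pulled back into per-parameter intervals, can be illustrated on the smallest possible case: a hidden binary cause with two observed noisy binary children. In the hedged sketch below the clipping intervals are hard-coded placeholders standing in for the ranges that the paper derives from the first step of the RBE algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Generate data from a hidden binary cause X with two noisy binary children.
N = 1000
x = rng.random(N) < 0.3
y = np.column_stack([
    np.where(x, rng.random(N) < 0.9, rng.random(N) < 0.2),
    np.where(x, rng.random(N) < 0.8, rng.random(N) < 0.1),
]).astype(float)

def clip_interval(value, lo, hi):
    """Regularization step: keep the estimate inside a prescribed range
    (a stand-in for the RBE-derived parameter ranges)."""
    return float(np.clip(value, lo, hi))

# Parameters: prior p = P(X=1), th[k, j] = P(Y_j = 1 | X = k).
p = 0.5
th = np.array([[0.3, 0.3], [0.7, 0.7]])   # asymmetric init to break symmetry
ranges_p, ranges_th = (0.05, 0.95), (0.05, 0.95)

for _ in range(50):
    # E-step: posterior responsibility of X=1 for each datapoint.
    lik1 = p * np.prod(th[1] ** y * (1 - th[1]) ** (1 - y), axis=1)
    lik0 = (1 - p) * np.prod(th[0] ** y * (1 - th[0]) ** (1 - y), axis=1)
    resp = lik1 / (lik1 + lik0)

    # M-step followed by the threshold/clipping regularization.
    p = clip_interval(resp.mean(), *ranges_p)
    for k, w in ((1, resp), (0, 1 - resp)):
        for j in range(2):
            th[k, j] = clip_interval((w * y[:, j]).sum() / w.sum(), *ranges_th)

print("P(X=1) ~", round(p, 3))
print("P(Y_j=1 | X) ~", th.round(3))
```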

Comparing Bayesian Network Classifiers

Cheng, Jie; Greiner, Russell
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published on 23/01/2013 Portuguese
Search relevance: 37.550977%
In this paper, we empirically evaluate algorithms for learning four types of Bayesian network (BN) classifiers - Naive-Bayes, tree augmented Naive-Bayes, BN augmented Naive-Bayes and general BNs, where the latter two are learned using two variants of a conditional-independence (CI) based BN-learning algorithm. Experimental results show the obtained classifiers, learned using the CI based algorithms, are competitive with (or superior to) the best known classifiers, based on both Bayesian networks and other formalisms; and that the computational time for learning and using these classifiers is relatively small. Moreover, these results also suggest a way to learn yet more effective classifiers; we demonstrate empirically that this new algorithm does work as expected. Collectively, these results argue that BN classifiers deserve more attention in the machine learning and data mining communities.; Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999)

Bayesian Analysis for miRNA and mRNA Interactions Using Expression Data

Zhong, Mingjun; Liu, Rong; Liu, Bo
Source: Cornell University Publisher: Cornell University
Type: Journal article
Portuguese
Search relevance: 37.5219%
MicroRNAs (miRNAs) are small RNA molecules composed of 19-22 nt, which play important regulatory roles in post-transcriptional gene regulation by inhibiting the translation of the mRNA into proteins or otherwise cleaving the target mRNA. Inferring miRNA targets provides useful information for understanding the roles of miRNA in biological processes that are potentially involved in complex diseases. Statistical methodologies for point estimation, such as the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm, have been proposed to identify the interactions of miRNA and mRNA based on sequence and expression data. In this paper, we propose using the Bayesian LASSO (BLASSO) and the non-negative Bayesian LASSO (nBLASSO) to analyse the interactions between miRNA and mRNA using expression data. The proposed Bayesian methods explore the posterior distributions for those parameters required to model the miRNA-mRNA interactions. These approaches can be used to observe the inferred effects of the miRNAs on the targets by plotting the posterior distributions of those parameters. For comparison purposes, the Least Squares Regression (LSR), Ridge Regression (RR), LASSO, non-negative LASSO (nLASSO), and the proposed Bayesian approaches were applied to four public datasets. We concluded that nLASSO and nBLASSO perform best in terms of sensitivity and specificity. Compared to the point estimate algorithms...
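
The point-estimate baselines mentioned in the abstract are easy to reproduce on synthetic data; the Bayesian LASSO variants proposed in the paper would additionally return posterior distributions over the coefficients rather than single estimates. The sketch below is a hedged illustration using scikit-learn, with Lasso(positive=True) serving as a stand-in for the non-negative variant, on a toy regression where only two of ten candidate regulators affect the target.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(4)

# Toy data: 10 candidate regulators, only two truly affect the target.
n, p = 100, 10
X = rng.normal(size=(n, p))
true_coef = np.zeros(p)
true_coef[[2, 7]] = [1.5, 0.8]
y = X @ true_coef + 0.3 * rng.normal(size=n)

models = {
    "ridge": Ridge(alpha=1.0),
    "lasso": Lasso(alpha=0.1),
    # positive=True acts as a stand-in for the non-negative (nLASSO) variant.
    "non-negative lasso": Lasso(alpha=0.1, positive=True),
}

for name, model in models.items():
    model.fit(X, y)
    print(f"{name:>20}: {np.round(model.coef_, 2)}")

# A Bayesian LASSO, as proposed in the paper, would instead place a Laplace
# prior on the coefficients and report full posterior distributions, which
# can then be plotted to inspect the inferred miRNA effects.
```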

Top-down particle filtering for Bayesian decision trees

Lakshminarayanan, Balaji; Roy, Daniel M.; Teh, Yee Whye
Source: Cornell University Publisher: Cornell University
Type: Journal article
Portuguese
Search relevance: 37.550977%
Decision tree learning is a popular approach for classification and regression in machine learning and statistics, and Bayesian formulations---which introduce a prior distribution over decision trees, and formulate learning as posterior inference given data---have been shown to produce competitive performance. Unlike classic decision tree learning algorithms like ID3, C4.5 and CART, which work in a top-down manner, existing Bayesian algorithms produce an approximation to the posterior distribution by evolving a complete tree (or collection thereof) iteratively via local Monte Carlo modifications to the structure of the tree, e.g., using Markov chain Monte Carlo (MCMC). We present a sequential Monte Carlo (SMC) algorithm that instead works in a top-down manner, mimicking the behavior and speed of classic algorithms. We demonstrate empirically that our approach delivers accuracy comparable to the most popular MCMC method, but operates more than an order of magnitude faster, and thus represents a better computation-accuracy tradeoff.; Comment: ICML 2013

Bayesian Network Structure Learning with Permutation Tests

Scutari, Marco; Brogini, Adriana
Source: Cornell University Publisher: Cornell University
Type: Journal article
Portuguese
Search relevance: 37.550977%
In the literature there are several studies on the performance of Bayesian network structure learning algorithms. The focus of these studies is almost always the heuristics the learning algorithms are based on, i.e. the maximisation algorithms (in score-based algorithms) or the techniques for learning the dependencies of each variable (in constraint-based algorithms). In this paper we investigate how the use of permutation tests instead of parametric ones affects the performance of Bayesian network structure learning from discrete data. Shrinkage tests are also covered to provide a broad overview of the techniques developed in the current literature.; Comment: 13 pages, 4 figures. Presented at the Conference 'Statistics for Complex Problems', Padova, June 15, 2010
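
The basic ingredient being swapped in, a permutation version of a conditional-independence test for discrete data, can be sketched as follows. This is a generic, hedged illustration rather than the tests studied in the paper: it uses the plug-in conditional mutual information as the statistic and permutes one variable within each stratum of the conditioning set, which preserves the conditional marginals under the null.

```python
import numpy as np
from collections import Counter

def cond_mutual_info(x, y, z):
    """Plug-in conditional mutual information I(X; Y | Z) for discrete arrays."""
    n = len(x)
    cxyz, cxz, cyz, cz = (Counter(zip(x, y, z)), Counter(zip(x, z)),
                          Counter(zip(y, z)), Counter(z))
    mi = 0.0
    for (a, b, c), nabc in cxyz.items():
        mi += nabc / n * np.log((nabc * cz[c]) / (cxz[(a, c)] * cyz[(b, c)]))
    return mi

def permutation_ci_test(x, y, z, n_perm=500, rng=np.random.default_rng(5)):
    """P-value for X independent of Y given Z, permuting X within each Z stratum."""
    observed = cond_mutual_info(x, y, z)
    count = 0
    for _ in range(n_perm):
        xp = x.copy()
        for c in np.unique(z):
            idx = np.where(z == c)[0]
            xp[idx] = xp[rng.permutation(idx)]
        count += cond_mutual_info(xp, y, z) >= observed
    return (count + 1) / (n_perm + 1)

# Toy usage: X -> Z -> Y, so X and Y are independent given Z.
rng = np.random.default_rng(5)
x = rng.integers(0, 2, 400)
z = (x ^ (rng.random(400) < 0.2)).astype(int)
y = (z ^ (rng.random(400) < 0.2)).astype(int)
print("p-value for X indep. Y | Z:", permutation_ci_test(x, y, z))   # expect large
print("p-value for X indep. Z | Y:", permutation_ci_test(x, z, y))   # expect small
```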

Large-Sample Learning of Bayesian Networks is NP-Hard

Chickering, David Maxwell; Meek, Christopher; Heckerman, David
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published on 19/10/2012 Portuguese
Search relevance: 37.550977%
In this paper, we provide new complexity results for algorithms that learn discrete-variable Bayesian networks from data. Our results apply whenever the learning algorithm uses a scoring criterion that favors the simplest model able to represent the generative distribution exactly. Our results therefore hold whenever the learning algorithm uses a consistent scoring criterion and is applied to a sufficiently large dataset. We show that identifying high-scoring structures is hard, even when we are given an independence oracle, an inference oracle, and/or an information oracle. Our negative results also apply to the learning of discrete-variable Bayesian networks in which each node has at most k parents, for all k > 3.; Comment: Appears in Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence (UAI2003)

A Bayesian alternative to mutual information for the hierarchical clustering of dependent random variables

Marrelec, Guillaume; Messé, Arnaud; Bellec, Pierre
Source: Cornell University Publisher: Cornell University
Type: Journal article
Portuguese
Search relevance: 37.5219%
The use of mutual information as a similarity measure in agglomerative hierarchical clustering (AHC) raises an important issue: some correction needs to be applied for the dimensionality of variables. In this work, we formulate the decision of merging dependent multivariate normal variables in an AHC procedure as a Bayesian model comparison. We found that the Bayesian formulation naturally shrinks the empirical covariance matrix towards a matrix set a priori (e.g., the identity), provides an automated stopping rule, and corrects for dimensionality using a term that scales up the measure as a function of the dimensionality of the variables. Also, the resulting log Bayes factor is asymptotically proportional to the plug-in estimate of mutual information, with an additive correction for dimensionality in agreement with the Bayesian information criterion. We investigated the behavior of these Bayesian alternatives (in exact and asymptotic forms) to mutual information on simulated and real data. An encouraging result was first derived on simulations: the hierarchical clustering based on the log Bayes factor outperformed off-the-shelf clustering techniques as well as raw and normalized mutual information in terms of classification accuracy. On a toy example...
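
The asymptotic relationship stated at the end of the abstract can be made concrete for the Gaussian case. With R̂ the empirical correlation matrix of the joined variables (of dimensions d_X and d_Y) and n the sample size, the plug-in mutual information and a BIC-style corrected criterion take roughly the following form; this is a hedged paraphrase, not the paper's exact expressions:

```latex
\hat{I}(X;Y) \;=\; \tfrac{1}{2}\,
  \ln\!\frac{\det \hat{R}_{XX}\,\det \hat{R}_{YY}}{\det \hat{R}}
\qquad\text{(plug-in Gaussian mutual information)}

\log \mathrm{BF} \;\approx\; n\,\hat{I}(X;Y)\;-\;\frac{d_X d_Y}{2}\,\ln n
\qquad\text{(BIC-style dimensionality correction)}
```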

An evolutionary algorithmic approach to learning a Bayesian network from complete data

Sahin, Ferat; Tillett, Jason; Rao, Raghuveer; Rao, T.
Source: SPIE Publisher: SPIE
Type: Proceedings
Portuguese
Search relevance: 37.55834%
Discovering relationships between variables is crucial for interpreting data from large databases. Relationships between variables can be modeled using a Bayesian network. The challenge of learning a Bayesian network from a complete dataset grows exponentially with the number of variables in the database and the number of states in each variable. It therefore becomes important to identify promising heuristics for exploring the space of possible networks. This paper utilizes an evolutionary algorithmic approach, Particle Swarm Optimization (PSO), to perform this search. A fundamental problem with a search for a Bayesian network is that of handling cyclic networks, which are not allowed. This paper explores the PSO approach, handling cyclic networks in two different ways. Results of network extraction for the well-studied ALARM network are presented for PSO simulations where cycles are broken heuristically at each step of the optimization and where networks with cycles are allowed to exist as candidate solutions but are assigned a poor fitness. The results of the two approaches are compared and it is found that allowing cyclic networks to exist in the particle swarm of candidate solutions can dramatically reduce the number of objective function evaluations required to converge to a target fitness value.; Copyright 2004 Society of Photo-Optical Instrumentation Engineers. These proceedings were published at the SPIE Defense and Security Symposium and are made available as an electronic reprint (preprint) with permission of SPIE. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction...
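
The cycle handling mentioned above, either repairing a candidate network or penalizing it in the fitness, rests on a cheap acyclicity check. Below is a hedged sketch of that ingredient using Kahn's algorithm, together with the second strategy from the abstract (cyclic candidates stay in the swarm but receive a very poor fitness); the PSO machinery and the actual network score are out of scope and replaced by a dummy score.

```python
from collections import deque

def is_acyclic(n_nodes, edges):
    """Kahn's algorithm: True iff the directed graph has no cycle."""
    indeg = [0] * n_nodes
    adj = [[] for _ in range(n_nodes)]
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(n_nodes) if indeg[i] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == n_nodes

def fitness(candidate_edges, n_nodes, score_fn, cycle_penalty=1e9):
    """Keep cyclic candidates in the swarm but give them a very poor fitness
    instead of repairing them (the second strategy from the abstract)."""
    if not is_acyclic(n_nodes, candidate_edges):
        return -cycle_penalty
    return score_fn(candidate_edges)

# Toy usage with a dummy score (more edges = better, just for illustration).
dag = [(0, 1), (1, 2), (0, 2)]
cyclic = [(0, 1), (1, 2), (2, 0)]
dummy_score = lambda edges: float(len(edges))
print(fitness(dag, 3, dummy_score), fitness(cyclic, 3, dummy_score))
```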

Redes bayesianas para predecir el estilo de aprendizaje de estudiantes en entornos virtuales; Bayesian networks to predict the learning style of students in virtual environments

López-Faican, Lissette Geoconda; Universidad Nacional de Loja - UNL; Chamba-Eras, Luis Antonio; Universidad Internacional del Ecuador - UIDE
Source: Mestrado Interdisciplinar em Ciência, Gestão e Tecnologia da Informação - UFPR Publisher: Mestrado Interdisciplinar em Ciência, Gestão e Tecnologia da Informação - UFPR
Type: info:eu-repo/semantics/article; info:eu-repo/semantics/publishedVersion; peer-reviewed article; experimental Format: text/html; application/pdf; application/epub+zip
Published on 05/03/2015 Portuguese
Search relevance: 37.550977%
Introduction: This article describes the use of Bayesian networks to implement an uncertainty model that predicts the learning style of students from their interactions in a virtual learning environment, based on the Felder-Silverman model. Method: The uncertainty model was designed and developed to be integrated into the LMS Moodle. To validate the proposed model, a real educational scenario was built with two experimental groups, one from the Universidad Nacional de Loja and the other from the Universidad Internacional del Ecuador. Results: The "Learning Style" (EA) block allowed students to visualize the probabilities of each dimension of their learning style and to observe that these probabilities changed according to their interactions. Likewise, teachers could visualize the learning-style probabilities obtained by each student from the interactions performed in the virtual course hosted in the Virtual Learning Environment. Conclusion: The proposal may serve as support for teachers who want to identify the predominant learning styles of their students and, based on that, prepare activities and resources in the courses under their responsibility.
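
A minimal sketch of the kind of inference such a block performs: a two-node Bayesian network in which one Felder-Silverman dimension (say visual versus verbal) generates two observable interaction indicators, and the posterior over the dimension is updated by enumeration as interactions are logged. All probabilities below are made-up placeholders, not values from the article or from the Moodle block.

```python
import numpy as np

# Hidden learning-style dimension: 0 = visual, 1 = verbal (Felder-Silverman).
prior = np.array([0.5, 0.5])

# P(indicator = 1 | style) for two hypothetical interaction indicators:
#   0: "opened a video/diagram resource", 1: "posted in a text forum".
# These numbers are illustrative placeholders only.
p_indicator = np.array([
    [0.8, 0.3],   # style = visual
    [0.3, 0.8],   # style = verbal
])

def posterior(observations, prior=prior):
    """Posterior over the style dimension given a list of (indicator, value) pairs."""
    post = prior.copy()
    for indicator, value in observations:
        likelihood = np.where(value == 1,
                              p_indicator[:, indicator],
                              1.0 - p_indicator[:, indicator])
        post = post * likelihood
        post = post / post.sum()
    return post

# Toy usage: a student who keeps opening visual material and avoids the forum.
logged = [(0, 1), (0, 1), (1, 0), (0, 1)]
print("P(visual), P(verbal) =", posterior(logged).round(3))
```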