# The best tool for your research, coursework, and thesis!

Page 16 of results: 2,413 digital items found in 0.009 seconds

## Bayesian Networks Applied to the Analysis of Credit Risk

Source: Biblioteca Digital de Teses e Dissertações da USP
Publisher: Biblioteca Digital de Teses e Dissertações da USP

Type: Master's dissertation
Format: application/pdf

Published on 26/02/2009
Portuguese

Search relevance: 37.279807%

Keywords: Bayesian networks, Credit risk, Credit, Statistics for artificial intelligence, Statistical inference, Logistic regression, Generalized linear models

Credit scoring models are used to estimate the probability that a credit applicant will default within a given period, based on the applicant's personal and financial information. In this work, the technique proposed for credit scoring is Bayesian networks (BNs), and their results were compared with those of logistic regression. The BNs evaluated were Bayesian network classifiers with the following structure types: Naive Bayes, Tree Augmented Naive Bayes (TAN), and General Bayesian Network (GBN). The BN structures were obtained by structure learning from a real data set. Model performance was assessed and compared through the hit rates from the confusion matrix, the Kolmogorov-Smirnov statistic, and the Gini coefficient. The development and validation samples were obtained by 10-fold cross-validation. Analysis of the fitted models showed that the BNs and logistic regression performed similarly with respect to the Kolmogorov-Smirnov statistic and the Gini coefficient. The TAN classifier was chosen as the best model because it performed best at predicting bad payers and allowed an analysis of interaction effects between variables.
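The Kolmogorov-Smirnov statistic and Gini coefficient used to compare the models can be computed directly from classifier scores. A minimal stdlib-Python sketch with made-up, perfectly separated scores (the data and helper names are illustrative, not from the dissertation):

```python
# Hypothetical scores: higher score = more likely to be a good payer.
goods = [0.9, 0.8, 0.75, 0.6, 0.55]
bads = [0.5, 0.4, 0.35, 0.3, 0.2]

def auc(goods, bads):
    # Probability that a random good payer scores above a random bad payer.
    wins = sum((g > b) + 0.5 * (g == b) for g in goods for b in bads)
    return wins / (len(goods) * len(bads))

def ks_statistic(goods, bads):
    # Maximum vertical distance between the two empirical CDFs of scores.
    thresholds = sorted(set(goods + bads))
    return max(
        abs(sum(g <= t for g in goods) / len(goods)
            - sum(b <= t for b in bads) / len(bads))
        for t in thresholds
    )

gini = 2 * auc(goods, bads) - 1  # Gini = 2*AUC - 1
print(ks_statistic(goods, bads), gini)
```

With perfectly separated scores both statistics reach their maximum of 1; real credit data gives values strictly between 0 and 1.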


## Python Environment for Bayesian Learning: Inferring the Structure of Bayesian Networks from Knowledge and Data

Source: PubMed
Publisher: PubMed

Type: Scientific journal article

Published on 01/06/2009
Portuguese

Search relevance: 37.255088%

In this paper, we introduce pebl, a Python library and application for learning Bayesian network structure from data and prior knowledge that provides features unmatched by alternative software packages: the ability to use interventional data, flexible specification of structural priors, modeling with hidden variables and exploitation of parallel processing.
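Structure learners like the one described score candidate networks against data. The sketch below shows a generic BIC-style node score over toy binary records; it does not use pebl's actual API, and the `bic` helper and data are hypothetical:

```python
import math
from collections import Counter

# Toy complete binary data over variables A and B (made up for illustration).
data = [
    {"A": 0, "B": 0}, {"A": 0, "B": 0}, {"A": 1, "B": 1},
    {"A": 1, "B": 1}, {"A": 1, "B": 0}, {"A": 0, "B": 1},
]

def bic(child, parents, data):
    """BIC-style score of one node given a candidate parent set."""
    n = len(data)
    counts = Counter((tuple(r[p] for p in parents), r[child]) for r in data)
    parent_counts = Counter(tuple(r[p] for p in parents) for r in data)
    loglik = sum(c * math.log(c / parent_counts[pa])
                 for (pa, _), c in counts.items())
    n_params = len(parent_counts)  # one free parameter per parent config (binary child)
    return loglik - 0.5 * n_params * math.log(n)

# A search procedure compares such scores across candidate parent sets.
print(bic("B", (), data), bic("B", ("A",), data))
```

Here the empty parent set wins: six records are too few to justify the extra parameter, which is the complexity penalty at work.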


## Stochastic Comparisons and Bayesian Inference in Software Reliability

Source: Universidad Carlos III de Madrid
Publisher: Universidad Carlos III de Madrid

Type: Doctoral thesis
Format: application/pdf

Portuguese

Search relevance: 37.255088%

Keywords: Stochastic comparisons, Bayesian inference, Software reliability, Statistics

Within the last decade of the 20th century and the first few years of the 21st century, the demand for complex software systems has increased, and therefore, the reliability of software systems has become a major concern for our modern society. Software reliability is defined as the probability of failure-free software operation for a specified period of time in a specified environment. Many current software reliability techniques and practices are detailed by Lyu and Pham. From a statistical point of view, the random variables that characterize software reliability are the epoch times at which a failure of software takes place or the times between failures. Most of the well-known models for software reliability are centered around the interfailure times or the point processes that they generate. A software reliability model specifies the general form of the dependence of the failure process on the principal factors that affect it: fault introduction, fault removal, and the operational environment. The purpose of this thesis is threefold: (1) to study stochastic properties of times between failures relative to independent but not identically distributed random variables; (2) to investigate properties of the epoch times of nonhomogeneous pure birth processes as an extension of nonhomogeneous Poisson processes used in the literature in software reliability modelling and...
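The epoch times and interfailure times discussed above are commonly modelled by nonhomogeneous point processes. The sketch below simulates epoch times from a Goel-Okumoto-style NHPP by thinning; the parameter values are hypothetical, and this is one standard model, not the thesis's own:

```python
import math
import random

random.seed(0)

# Goel-Okumoto-style NHPP intensity lambda(t) = a*b*exp(-b*t).
a, b = 20.0, 0.5               # a = expected total faults, b = detection rate

def lam(t):
    return a * b * math.exp(-b * t)

LAM_MAX = a * b                # the intensity decreases, so its max is at t = 0

def simulate_epoch_times(horizon):
    """Simulate failure epoch times on [0, horizon] by thinning."""
    t, epochs = 0.0, []
    while True:
        t += random.expovariate(LAM_MAX)        # candidate event at rate LAM_MAX
        if t > horizon:
            return epochs
        if random.random() < lam(t) / LAM_MAX:  # accept with prob lam(t)/LAM_MAX
            epochs.append(t)

times = simulate_epoch_times(horizon=10.0)
print(len(times), [round(t, 2) for t in times[:3]])
```

The interfailure times are then simply the successive differences of `times`.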


## The Non-Bayesian Restless Multi-Armed Bandit: A Case of Near-Logarithmic Strict Regret

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 07/09/2011
Portuguese

Search relevance: 37.279807%

Keywords: Mathematics - Optimization and Control, Computer Science - Learning, Computer Science - Networking and Internet Architecture, Computer Science - Systems and Control, Mathematics - Probability

In the classic Bayesian restless multi-armed bandit (RMAB) problem, there are
$N$ arms, with rewards on all arms evolving at each time as Markov chains with
known parameters. A player seeks to activate $K \geq 1$ arms at each time in
order to maximize the expected total reward obtained over multiple plays. RMAB
is a challenging problem that is known to be PSPACE-hard in general. We
consider in this work the even harder non-Bayesian RMAB, in which the
parameters of the Markov chain are assumed to be unknown \emph{a priori}. We
develop an original approach to this problem that is applicable when the
corresponding Bayesian problem has the structure that, depending on the known
parameter values, the optimal solution is one of a prescribed finite set of
policies. In such settings, we propose to learn the optimal policy for the
non-Bayesian RMAB by employing a suitable meta-policy which treats each policy
from this finite set as an arm in a different non-Bayesian multi-armed bandit
problem for which a single-arm selection policy is optimal. We demonstrate this
approach by developing a novel sensing policy for opportunistic spectrum access
over unknown dynamic channels. We prove that our policy achieves
near-logarithmic regret (the difference in expected reward compared to a
model-aware genie)...
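The meta-policy idea, treating each candidate policy as an arm, can be illustrated with a standard UCB1 index over a toy Bernoulli "policy set"; the rewards and means below are stand-ins, not the paper's RMAB setting:

```python
import math
import random

random.seed(1)

# Hypothetical finite policy set: playing "policy i" yields a Bernoulli
# reward with unknown mean (a stand-in for running a candidate RMAB policy).
true_means = [0.3, 0.7]

def ucb1(n_rounds):
    counts = [0] * len(true_means)
    sums = [0.0] * len(true_means)
    for t in range(1, n_rounds + 1):
        if t <= len(true_means):
            i = t - 1                       # play each policy once first
        else:
            # UCB index: empirical mean + exploration bonus.
            i = max(range(len(true_means)),
                    key=lambda j: sums[j] / counts[j]
                                  + math.sqrt(2 * math.log(t) / counts[j]))
        reward = 1.0 if random.random() < true_means[i] else 0.0
        counts[i] += 1
        sums[i] += reward
    return counts

counts = ucb1(2000)
print(counts)  # the better policy is chosen far more often
```

The suboptimal policy is sampled only logarithmically often, which is the source of the near-logarithmic regret claimed above.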


## Deriving a Stationary Dynamic Bayesian Network from a Logic Program with Recursive Loops

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 27/06/2005
Portuguese

Search relevance: 37.279807%

Keywords: Computer Science - Artificial Intelligence, Computer Science - Learning, Computer Science - Logic in Computer Science

Recursive loops in a logic program present a challenging problem to the PLP
framework. On the one hand, they loop forever so that the PLP backward-chaining
inferences would never stop. On the other hand, they generate cyclic
influences, which are disallowed in Bayesian networks. Therefore, in existing
PLP approaches logic programs with recursive loops are considered to be
problematic and thus are excluded. In this paper, we propose an approach that
makes use of recursive loops to build a stationary dynamic Bayesian network.
Our work stems from an observation that recursive loops in a logic program
imply a time sequence and thus can be used to model a stationary dynamic
Bayesian network without using explicit time parameters. We introduce a
Bayesian knowledge base with logic clauses of the form $A \leftarrow
A_1,...,A_l, true, Context, Types$, which naturally represents the knowledge
that the $A_i$s have direct influences on $A$ in the context $Context$ under
the type constraints $Types$. We then use the well-founded model of a logic
program to define the direct influence relation and apply SLG-resolution to
compute the space of random variables together with their parental connections.
We introduce a novel notion of influence clauses...


## Learning Bayesian Nets that Perform Well

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 06/02/2013
Portuguese

Search relevance: 37.28901%

A Bayesian net (BN) is more than a succinct way to encode a probabilistic
distribution; it also corresponds to a function used to answer queries. A BN
can therefore be evaluated by the accuracy of the answers it returns. Many
algorithms for learning BNs, however, attempt to optimize another criterion
(usually likelihood, possibly augmented with a regularizing term), which is
independent of the distribution of queries that are posed. This paper takes the
"performance criteria" seriously, and considers the challenge of computing the
BN whose performance - read "accuracy over the distribution of queries" - is
optimal. We show that many aspects of this learning task are more difficult
than the corresponding subtasks in the standard model.; Comment: Appears in Proceedings of the Thirteenth Conference on Uncertainty in
Artificial Intelligence (UAI1997)


## Bayesian Structure Learning for Markov Random Fields with a Spike and Slab Prior

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Portuguese

Search relevance: 37.28901%

In recent years a number of methods have been developed for automatically
learning the (sparse) connectivity structure of Markov Random Fields. These
methods are mostly based on L1-regularized optimization which has a number of
disadvantages such as the inability to assess model uncertainty and expensive
cross-validation to find the optimal regularization parameter. Moreover, the
model's predictive performance may degrade dramatically with a suboptimal value
of the regularization parameter (which is sometimes desirable to induce
sparseness). We propose a fully Bayesian approach based on a "spike and slab"
prior (similar to L0 regularization) that does not suffer from these
shortcomings. We develop an approximate MCMC method combining Langevin dynamics
and reversible jump MCMC to conduct inference in this model. Experiments show
that the proposed model learns a good combination of the structure and
parameter values without the need for separate hyper-parameter tuning.
Moreover, the model's predictive performance is much more robust than L1-based
methods with hyper-parameter settings that induce highly sparse model
structures.; Comment: Accepted in the Conference on Uncertainty in Artificial Intelligence
(UAI), 2012
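The "spike and slab" prior itself is easy to sketch: each weight is exactly zero with the spike probability and Gaussian otherwise, which is what induces sparse structures. The parameters below are illustrative, not from the paper:

```python
import random

random.seed(42)

# Hypothetical spike-and-slab prior over edge weights: with probability
# p_slab the weight comes from a wide Gaussian "slab", otherwise it is
# exactly zero (the "spike").
p_slab, slab_sd = 0.2, 1.0

def draw_weights(n):
    return [random.gauss(0.0, slab_sd) if random.random() < p_slab else 0.0
            for _ in range(n)]

w = draw_weights(10_000)
sparsity = sum(x == 0.0 for x in w) / len(w)
print(sparsity)  # close to 1 - p_slab = 0.8
```

Unlike an L1 penalty, the prior puts genuine point mass at zero, so posterior inference can report the probability that each edge is absent.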


## Bayesian Structure Learning for Markov Random Fields with a Spike and Slab Prior

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 09/08/2014
Portuguese

Search relevance: 37.28901%

In recent years a number of methods have been developed for automatically
learning the (sparse) connectivity structure of Markov Random Fields. These
methods are mostly based on L1-regularized optimization which has a number of
disadvantages such as the inability to assess model uncertainty and expensive
cross-validation to find the optimal regularization parameter. Moreover, the
model's predictive performance may degrade dramatically with a suboptimal value
of the regularization parameter (which is sometimes desirable to induce
sparseness). We propose a fully Bayesian approach based on a "spike and slab"
prior (similar to L0 regularization) that does not suffer from these
shortcomings. We develop an approximate MCMC method combining Langevin dynamics
and reversible jump MCMC to conduct inference in this model. Experiments show
that the proposed model learns a good combination of the structure and
parameter values without the need for separate hyper-parameter tuning.
Moreover, the model's predictive performance is much more robust than L1-based
methods with hyper-parameter settings that induce highly sparse model
structures.; Comment: Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012)


## Learning Bayesian Networks with the bnlearn R Package

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Portuguese

Search relevance: 37.28901%

bnlearn is an R package which includes several algorithms for learning the
structure of Bayesian networks with either discrete or continuous variables.
Both constraint-based and score-based algorithms are implemented, and can use
the functionality provided by the snow package to improve their performance via
parallel computing. Several network scores and conditional independence
tests are available for both the learning algorithms and independent use.
Advanced plotting options are provided by the Rgraphviz package.; Comment: 22 pages, 4 pictures
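Constraint-based learners like those in bnlearn decide on edges with (conditional) independence tests. A stdlib-Python sketch of an empirical mutual-information test on toy binary data (this is not bnlearn's R API; the data are made up):

```python
import math
from collections import Counter

# Toy binary observations of a pair of variables (X, Y).
data = [(0, 0), (0, 0), (1, 1), (1, 1), (0, 1), (1, 0)] * 10

def mutual_information(pairs):
    """Empirical mutual information I(X; Y) in nats."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

mi = mutual_information(data)
print(mi)  # near zero suggests independence; here X and Y are dependent
```

A constraint-based learner compares such a statistic against a significance threshold to decide whether to keep the X-Y edge.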


## Bayesian Sparse Tucker Models for Dimension Reduction and Tensor Completion

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 10/05/2015
Portuguese

Search relevance: 37.28901%

Tucker decomposition is a cornerstone of modern machine learning on
tensorial data, and has attracted considerable attention for multiway
feature extraction, compressive sensing, and tensor completion. The most
challenging problem is the determination of model complexity (i.e.,
multilinear rank), especially when noise and missing data are present. In
addition, existing methods cannot take into account uncertainty in the
latent factors, resulting in low generalization performance. To address
these issues, we present a class of probabilistic generative Tucker models
for tensor decomposition and completion with structural sparsity over the
multilinear latent space. To exploit structural sparse modeling, we
introduce two group-sparsity-inducing priors via hierarchical
representations of the Laplace and Student-t distributions, which
facilitates full posterior inference. For model learning, we derive
variational Bayesian inference over all model (hyper)parameters, and
develop efficient and scalable algorithms based on multilinear operations.
Our methods can automatically adapt model complexity and infer an optimal
multilinear rank by the principle of maximizing the lower bound of the
model evidence. Experimental results and comparisons on synthetic...


## Multi-agent Inverse Reinforcement Learning for Zero-sum Games

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Portuguese

Search relevance: 37.28901%

Keywords: Computer Science - Computer Science and Game Theory, Computer Science - Artificial Intelligence, Computer Science - Learning

In this paper we introduce a Bayesian framework for solving a class of
problems termed Multi-agent Inverse Reinforcement Learning (MIRL). Compared to
the well-known Inverse Reinforcement Learning (IRL) problem, MIRL is formalized
in the context of a stochastic game rather than a Markov decision process
(MDP). Games bring two primary challenges: First, the concept of optimality,
central to MDPs, loses its meaning and must be replaced with a more general
solution concept, such as the Nash equilibrium. Second, the non-uniqueness of
equilibria means that in MIRL, in addition to multiple reasonable solutions for
a given inversion model, there may be multiple inversion models that are all
equally sensible approaches to solving the problem. We establish a theoretical
foundation for competitive two-agent MIRL problems and propose a Bayesian
optimization algorithm to solve the problem. We focus on the case of two-person
zero-sum stochastic games, developing a generative model for the likelihood of
unknown rewards of agents given observed game play assuming that the two agents
follow a minimax bipolicy. As a numerical illustration, we apply our method in
the context of an abstract soccer game. For the soccer game, we investigate
relationships between the extent of prior information and the quality of
learned rewards. Results suggest that covariance structure is more important
than mean value in reward priors.
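The minimax solution concept assumed for the observed bipolicy can be seen on a toy zero-sum payoff matrix with a saddle point; the matrix is illustrative and unrelated to the paper's soccer game:

```python
# Pure-strategy minimax on a toy zero-sum payoff matrix: the row player
# maximises the payoff, the column player minimises it.
G = [[2, 3],
     [1, 4]]

maximin = max(min(row) for row in G)                  # row player's security level
minimax = min(max(G[i][j] for i in range(len(G)))     # column player's security level
              for j in range(len(G[0])))

print(maximin, minimax)  # equal here: a saddle point with game value 2
```

When the two security levels coincide, the pair of strategies achieving them is a Nash equilibrium of the zero-sum game; in general, mixed strategies are needed to make them coincide.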


## The Non-Bayesian Restless Multi-Armed Bandit: a Case of Near-Logarithmic Regret

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 22/11/2010
Portuguese

Search relevance: 37.279807%

Keywords: Mathematics - Optimization and Control, Computer Science - Learning, Computer Science - Networking and Internet Architecture, Mathematics - Probability

In the classic Bayesian restless multi-armed bandit (RMAB) problem, there are
$N$ arms, with rewards on all arms evolving at each time as Markov chains with
known parameters. A player seeks to activate $K \geq 1$ arms at each time in
order to maximize the expected total reward obtained over multiple plays. RMAB
is a challenging problem that is known to be PSPACE-hard in general. We
consider in this work the even harder non-Bayesian RMAB, in which the
parameters of the Markov chain are assumed to be unknown \emph{a priori}. We
develop an original approach to this problem that is applicable when the
corresponding Bayesian problem has the structure that, depending on the known
parameter values, the optimal solution is one of a prescribed finite set of
policies. In such settings, we propose to learn the optimal policy for the
non-Bayesian RMAB by employing a suitable meta-policy which treats each policy
from this finite set as an arm in a different non-Bayesian multi-armed bandit
problem for which a single-arm selection policy is optimal. We demonstrate this
approach by developing a novel sensing policy for opportunistic spectrum access
over unknown dynamic channels. We prove that our policy achieves
near-logarithmic regret (the difference in expected reward compared to a
model-aware genie)...


## Covariance-Controlled Adaptive Langevin Thermostat for Large-Scale Bayesian Sampling

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 29/10/2015
Portuguese

Search relevance: 37.28901%

Monte Carlo sampling for Bayesian posterior inference is a common approach
used in machine learning. The Markov Chain Monte Carlo procedures that are used
are often discrete-time analogues of associated stochastic differential
equations (SDEs). These SDEs are guaranteed to leave invariant the required
posterior distribution. An area of current research addresses the computational
benefits of stochastic gradient methods in this setting. Existing techniques
rely on estimating the variance or covariance of the subsampling error, and
typically assume constant variance. In this article, we propose a
covariance-controlled adaptive Langevin thermostat that can effectively
dissipate parameter-dependent noise while maintaining a desired target
distribution. The proposed method achieves a substantial speedup over popular
alternative schemes for large-scale machine learning applications.; Comment: Advances in Neural Information Processing Systems (NIPS), 2015
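The discrete-time analogue of a Langevin SDE mentioned above can be sketched for a standard-normal target; this toy version omits the stochastic-gradient noise and the proposed thermostat, so it only shows the basic sampler the paper builds on:

```python
import math
import random
import statistics

random.seed(3)

def langevin_samples(n_steps, eps=0.01, burn_in=1000):
    """Euler discretisation of Langevin dynamics targeting N(0, 1)."""
    theta, out = 0.0, []
    for t in range(n_steps):
        grad_log_p = -theta                       # d/d(theta) of log N(theta; 0, 1)
        theta += 0.5 * eps * grad_log_p + math.sqrt(eps) * random.gauss(0, 1)
        if t >= burn_in:
            out.append(theta)
    return out

samples = langevin_samples(50_000)
print(statistics.mean(samples), statistics.stdev(samples))  # near 0 and 1
```

The discretisation leaves the target only approximately invariant (bias of order `eps`); stochastic-gradient variants add further subsampling noise, which is exactly what the covariance-controlled thermostat is designed to dissipate.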


## Near Optimal Bayesian Active Learning for Decision Making

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 24/02/2014
Portuguese

Search relevance: 37.28901%

How should we gather information to make effective decisions? We address
Bayesian active learning and experimental design problems, where we
sequentially select tests to reduce uncertainty about a set of hypotheses.
Instead of minimizing uncertainty per se, we consider a set of overlapping
decision regions of these hypotheses. Our goal is to drive uncertainty into a
single decision region as quickly as possible.
We identify necessary and sufficient conditions for correctly identifying a
decision region that contains all hypotheses consistent with observations. We
develop a novel Hyperedge Cutting (HEC) algorithm for this problem, and prove
that it is competitive with the intractable optimal policy. Our efficient
implementation of the algorithm relies on computing subsets of the complete
homogeneous symmetric polynomials. Finally, we demonstrate its effectiveness on
two practical applications: approximate comparison-based learning and active
localization using a robot manipulator.; Comment: Extended version of work appearing in the International conference on
Artificial Intelligence and Statistics (AISTATS) 2014


## Bayesian Sample Size Determination of Vibration Signals in Machine Learning Approach to Fault Diagnosis of Roller Bearings

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 25/02/2014
Portuguese

Search relevance: 37.28901%

Sample size determination for a data set is an important statistical process
for analyzing the data to an optimum level of accuracy and using minimum
computational work. The applications of this process are credible in every
domain which deals with large data sets and high computational work. This study
uses Bayesian analysis for determination of minimum sample size of vibration
signals to be considered for fault diagnosis of a bearing using pre-defined
parameters such as the inverse standard probability and the acceptable margin
of error. Thus an analytical formula for sample size determination is
introduced. The fault diagnosis of the bearing is done using a machine learning
approach using an entropy-based J48 algorithm. The following method will help
researchers involved in fault diagnosis to determine minimum sample size of
data for analysis for a good statistical stability and precision.; Comment: 14 pages, 1 table, 6 figures
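The ingredients named here, an inverse standard-normal probability (z-value) and an acceptable margin of error E, combine in the textbook sample-size formula n = (z·σ/E)². A sketch of that generic formula (not necessarily the paper's exact Bayesian expression):

```python
import math
from statistics import NormalDist  # Python 3.8+

def sample_size(confidence, sigma, margin):
    """Classical n = (z * sigma / margin)**2, rounded up."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided z-value
    return math.ceil((z * sigma / margin) ** 2)

# 95% confidence, unit standard deviation, margin of error 0.1.
print(sample_size(confidence=0.95, sigma=1.0, margin=0.1))  # 385
```

Halving the margin of error quadruples the required sample size, which is why trimming the vibration-signal sample to the minimum acceptable precision saves so much computation.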


## On Local Optima in Learning Bayesian Networks

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 19/10/2012
Portuguese

Search relevance: 37.28901%

Keywords: Computer Science - Learning, Computer Science - Artificial Intelligence, Statistics - Machine Learning

This paper proposes and evaluates the k-greedy equivalence search algorithm
(KES) for learning Bayesian networks (BNs) from complete data. The main
characteristic of KES is that it allows a trade-off between greediness and
randomness, thus exploring different good local optima. When greediness is set
at maximum, KES corresponds to the greedy equivalence search algorithm (GES).
When greediness is kept at minimum, we prove that under mild assumptions KES
asymptotically returns any inclusion optimal BN with nonzero probability.
Experimental results for both synthetic and real data are reported showing that
KES often finds a better local optimum than GES. Moreover, we use KES to
experimentally confirm that the number of different local optima is often huge.; Comment: Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003)
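The greediness/randomness trade-off can be sketched as a local search that takes either the best or a random improving move; the toy integer search space below stands in for the space of BN equivalence classes:

```python
import random

random.seed(7)

def score(x):
    # Toy unimodal objective with its optimum at x = 3.
    return -(x - 3) ** 2

def local_search(start, greediness, steps=100):
    """KES-style search: greedy best move with prob. `greediness`,
    otherwise a random score-improving move."""
    x = start
    for _ in range(steps):
        better = [n for n in (x - 1, x + 1) if score(n) > score(x)]
        if not better:
            return x                      # local optimum reached
        if random.random() < greediness:
            x = max(better, key=score)    # greedy choice (GES-like at 1.0)
        else:
            x = random.choice(better)     # random improving choice
    return x

print(local_search(start=-10, greediness=1.0))
```

On a multimodal score, different runs with low greediness land in different local optima, which is the behaviour KES exploits to explore them.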


## Scoring and Searching over Bayesian Networks with Causal and Associative Priors

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Portuguese

Search relevance: 37.28901%

Keywords: Computer Science - Artificial Intelligence, Computer Science - Learning, Statistics - Machine Learning

A significant theoretical advantage of search-and-score methods for learning
Bayesian Networks is that they can accept informative prior beliefs for each
possible network, thus complementing the data. In this paper, a method is
presented for assigning priors based on beliefs on the presence or absence of
certain paths in the true network. Such beliefs correspond to knowledge about
the possible causal and associative relations between pairs of variables. This
type of knowledge naturally arises from prior experimental and observational
data, among others. In addition, a novel search-operator is proposed to take
advantage of such prior knowledge. Experiments show that, using path beliefs
improves the learning of the skeleton, as well as the edge directions in the
network.; Comment: Accepted for publication to the 29th Conference on Uncertainty in
Artificial Intelligence (UAI-2013). The content of the paper is identical to
the published one, but the compiler at arXiv produces a 11 page long paper,
whereas the compiler we used produces a 10 page long paper (page limit for
the conference)


## Learning Gaussian Networks

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 27/02/2013
Portuguese

Search relevance: 37.28901%

Keywords: Computer Science - Artificial Intelligence, Computer Science - Learning, Statistics - Machine Learning

We describe algorithms for learning Bayesian networks from a combination of
user knowledge and statistical data. The algorithms have two components: a
scoring metric and a search procedure. The scoring metric takes a network
structure, statistical data, and a user's prior knowledge, and returns a score
proportional to the posterior probability of the network structure given the
data. The search procedure generates networks for evaluation by the scoring
metric. Previous work has concentrated on metrics for domains containing only
discrete variables, under the assumption that data represents a multinomial
sample. In this paper, we extend this work, developing scoring metrics for
domains containing all continuous variables or a mixture of discrete and
continuous variables, under the assumption that continuous data is sampled from
a multivariate normal distribution. Our work extends traditional statistical
approaches for identifying vanishing regression coefficients in that we
identify two important assumptions, called event equivalence and parameter
modularity, that when combined allow the construction of prior distributions
for multivariate normal parameters from a single prior Bayesian network
specified by a user.; Comment: Appears in Proceedings of the Tenth Conference on Uncertainty in
Artificial Intelligence (UAI1994)


## Binary Classifier Calibration: Bayesian Non-Parametric Approach

Source: Cornell University
Publisher: Cornell University

Type: Scientific journal article

Published on 13/01/2014
Portuguese

Search relevance: 37.28901%

A set of probabilistic predictions is well calibrated if the events that are
predicted to occur with probability p do in fact occur about p fraction of the
time. Well calibrated predictions are particularly important when machine
learning models are used in decision analysis. This paper presents two new
non-parametric methods for calibrating outputs of binary classification models:
a method based on the Bayes optimal selection and a method based on the
Bayesian model averaging. The advantage of these methods is that they are
independent of the algorithm used to learn a predictive model, and they can be
applied in a post-processing step, after the model is learned. This makes them
applicable to a wide variety of machine learning models and methods. These
calibration methods, as well as other methods, are tested on a variety of
datasets in terms of both discrimination and calibration performance. The
results show the methods either outperform or are comparable in performance to
the state-of-the-art calibration methods.
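A calibration check of the kind evaluated here groups predictions into probability bins and compares the mean predicted probability with the observed event frequency. A sketch on made-up predictions (not the paper's methods or data):

```python
# Hypothetical (predicted probability, outcome) pairs.
preds = [(0.1, 0), (0.1, 0), (0.1, 0), (0.1, 0), (0.1, 1),
         (0.9, 1), (0.9, 1), (0.9, 1), (0.9, 1), (0.9, 0)]

def reliability(preds, n_bins=10):
    """Per-bin (mean predicted probability, observed frequency)."""
    bins = {}
    for p, y in preds:
        b = min(int(p * n_bins), n_bins - 1)
        bins.setdefault(b, []).append((p, y))
    return {b: (sum(p for p, _ in v) / len(v),   # mean predicted probability
                sum(y for _, y in v) / len(v))   # observed event frequency
            for b, v in bins.items()}

for b, (mean_pred, obs_freq) in sorted(reliability(preds).items()):
    print(b, mean_pred, obs_freq)
```

Well-calibrated predictions put the two numbers close together in every bin; the post-processing methods described above adjust the raw scores to achieve exactly that.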


## Using parameterized calculus questions for learning and assessment

Source: IEEE
Publisher: IEEE

Type: Conference paper or conference object

Portuguese

Search relevance: 37.28901%

We have implemented a Web application reusing questions from two computer systems: true/false questions from Project A and multiple-choice questions from Project B. Our application implements a Bayesian user model for diagnosing student knowledge of the topics covered. In this article we propose using this system for both learning and assessment in a calculus course, encouraging students to work throughout the semester without increasing the workload for teachers.
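A Bayesian user model of this kind can be sketched as a posterior update of P(student knows the topic) after each answer; the slip and guess parameters below are hypothetical, not taken from the article:

```python
# Hypothetical response model: SLIP = P(wrong answer | knows the topic),
# GUESS = P(correct answer | does not know the topic, true/false question).
SLIP, GUESS = 0.1, 0.5

def update(p_know, correct):
    """Bayes update of P(knows) after observing one answer."""
    if correct:
        num = p_know * (1 - SLIP)
        den = num + (1 - p_know) * GUESS
    else:
        num = p_know * SLIP
        den = num + (1 - p_know) * (1 - GUESS)
    return num / den

p = 0.5                                   # uninformative starting belief
for answer_correct in (True, True, False, True):
    p = update(p, answer_correct)
print(round(p, 3))
```

Each correct answer raises the belief and each error lowers it, so the model tracks student knowledge question by question, which is what lets one system serve both learning and assessment.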
