# The best tool for your research, coursework, and final-year thesis (TCC)!

Page 1 of results: 1,256 digital items found in 0.022 seconds

- Biblioteca Digital da Unicamp
- Universidade Carlos III de Madrid
- Cornell University
- Institute of Mathematical Statistics
- Office for National Statistics
- London School of Economics and Political Science Research
- Centre for Economic Performance, London School of Economics and Political Science
- The Institute of Mathematical Statistics
- Wiley-Blackwell
- Maths, Stats & OR Network
- Department of Statistics, London School of Economics and Political Science
- London School of Economics and Political Science Thesis
- More publishers…

## Study on the Application of Bayesian Statistics and the Maximum Entropy Method in Data Analysis

Source: Biblioteca Digital da Unicamp
Publisher: Biblioteca Digital da Unicamp

Type: Master's dissertation
Format: application/pdf

Published: 19 April 2007
Language: Portuguese

Search relevance: 351.47312%

Keywords: Cosmic rays; Cosmic ray showers; Bayesian statistics; Maximum entropy method

In this work, we study the methods of Bayesian statistics and maximum entropy in data analysis. We present a review of the basic concepts and procedures that can be used for inference of probability distributions. The methods are applied in several fields of interest, with special attention to cases where there is little information about the data set, as found in physics experiments such as high-energy physics and astrophysics, among others. Algorithms are presented for the implementation of these methods, together with some detailed examples intended to help those interested in applying them to the most common cases of data analysis.
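As an illustrative aside (not taken from the dissertation), the maximum-entropy distribution on a finite support subject to a mean constraint has the exponential-family form $p_i \propto e^{-\lambda x_i}$, and the multiplier $\lambda$ can be found by one-dimensional bisection, since the mean is monotone in $\lambda$. The support, target mean, and function name below are arbitrary example choices:

```python
import numpy as np

def maxent_given_mean(values, target_mean, lo=-50.0, hi=50.0):
    """Maximum-entropy pmf on `values` with a prescribed mean.  The
    solution is exponential-family, p_i ∝ exp(-lam * x_i); bisect on
    lam, using that the mean is decreasing in lam."""
    values = np.asarray(values, dtype=float)

    def pmf(lam):
        w = np.exp(-lam * (values - values.mean()))  # shift for stability
        return w / w.sum()

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if pmf(mid) @ values > target_mean:
            lo = mid          # mean too large -> need larger lam
        else:
            hi = mid
    return pmf(0.5 * (lo + hi))

# Maximum-entropy pmf on {0,...,5} with mean constrained to 2.0
p = maxent_given_mean(np.arange(6), 2.0)
```

The resulting pmf is strictly positive, sums to one, and has the prescribed mean; any other pmf on the same support with that mean has lower entropy.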


## Extremality in multivariate statistics

Source: Universidad Carlos III de Madrid
Publisher: Universidad Carlos III de Madrid

Type: Doctoral thesis
Format: application/pdf

Language: Portuguese

Search relevance: 358.2358%

Multivariate order is a valuable tool for analyzing data properties and for extending univariate concepts based on order, such as the median, range, extremes, quantiles, or order statistics, to multivariate data. Generalizing such concepts to the multivariate case is not straightforward. While different ways of generalizing quantiles have been studied by Chaudhuri [10], a description of extensions of concepts such as the median, range and quantiles to the multivariate framework has been provided by Barnett [3]. The key problem, however, in generalizing these concepts to several dimensions is the lack of a unique criterion for ordering multivariate observations. Over the last few decades, multivariate stochastic orders have also become a powerful means of comparing random vectors, especially in situations where the distributions are only partially known. In particular, multivariate stochastic orders have a wide range of applications in portfolio theory. The thesis is motivated by the aspects mentioned above and its purpose is threefold. First, it introduces the multivariate extremality as a methodology that measures the farness of a point x with respect to a data cloud or a distribution function. We study the main properties of this new concept, as well as asymptotic results...
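The thesis's extremality functional is its own construction; purely as a generic illustration of measuring the "farness" of a point from a data cloud, one can use the Mahalanobis distance to the sample mean. The function name and data below are invented for this sketch:

```python
import numpy as np

def mahalanobis_farness(x, data):
    """Farness of point x from a data cloud: Mahalanobis distance to the
    sample mean (a generic depth-style measure, not the thesis's
    extremality functional)."""
    mu = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)
    diff = np.asarray(x, dtype=float) - mu
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3))                      # toy data cloud
d_center = mahalanobis_farness(cloud.mean(axis=0), cloud)
d_far = mahalanobis_farness([5.0, 5.0, 5.0], cloud)    # a distant point
```

By construction the measure is zero at the cloud's center of mass and grows as the point moves away, accounting for the cloud's covariance structure.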


## Fractional Statistics in One-Dimension: View From An Exactly Solvable Model

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Language: Portuguese

Search relevance: 351.47312%

One-dimensional fractional statistics is studied using the Calogero-Sutherland model (CSM), which describes a system of non-relativistic quantum particles interacting through an inverse-square two-body potential on a ring. The inverse-square exchange can be regarded as a pure statistical interaction, and this system can be mapped to an ideal gas obeying fractional exclusion and exchange statistics. The details of the exact calculations of the dynamical correlation functions for this ideal system are presented in this paper. An effective low-energy one-dimensional "anyon" model is constructed, and its correlation functions are found to be in agreement with those in the CSM; this agreement provides evidence for the equivalence of the first- and second-quantized constructions of the 1D anyon model, at least in the long-wavelength limit. Furthermore, the finite-size scaling applicable to conformally invariant systems is used to obtain the complete set of correlation exponents for the CSM.

Comment: 42 RevTeX pages + 5 separate PostScript figures + some minor corrections


## Exact Dynamical Correlation Functions of Calogero-Sutherland Model and One-Dimensional Fractional Statistics

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published: 23 May 1994
Language: Portuguese

Search relevance: 357.9979%

The one-dimensional model of non-relativistic particles with inverse-square interaction potential, known as the Calogero-Sutherland model (CSM), is shown to possess fractional statistics. Using the theory of Jack symmetric polynomials, the exact dynamical density-density correlation function and the one-particle Green's function (hole propagator) at any rational interaction coupling constant $\lambda = p/q$ are obtained and used to show clear evidence of fractional statistics. Motifs representing the eigenstates of the model are also constructed and used to reveal the fractional *exclusion* statistics (in the sense of Haldane's "Generalized Pauli Exclusion Principle"). The model is also endowed with a natural *exchange* statistics (the 1D analog of 2D braiding statistics) compatible with the *exclusion* statistics. (Submitted to PRL on April 18, 1994)

Comment: RevTeX, 11 pages, IASSNS-HEP-94/27 (April 18, 1994)


## The disorder problem for compound Poisson processes with exponential jumps

Source: Institute of Mathematical Statistics
Publisher: Institute of Mathematical Statistics

Type: Article (peer-reviewed)
Format: application/pdf

Published: 2005
Language: Portuguese

Search relevance: 347.2932%

The problem of disorder seeks to determine a stopping time which is as close as possible to the unknown time of “disorder” when the observed process changes its probability characteristics. We give a partial answer to this question for some special cases of Lévy processes and present a complete solution of the Bayesian and variational problem for a compound Poisson process with exponential jumps. The method of proof is based on reducing the Bayesian problem to an integro-differential free-boundary problem where, in some cases, the smooth-fit principle breaks down and is replaced by the principle of continuous fit.
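The paper works in continuous time and reduces the Bayesian problem to an integro-differential free-boundary problem; as a loose, discretized cousin of the disorder problem, the Bayesian posterior over a single change-point in Poisson count data is available in closed form under a uniform prior. The rates, horizon, and change location below are made-up example values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta_true = 60, 35               # 60 intervals, change after interval 35
lam0, lam1 = 2.0, 5.0                # pre- and post-change Poisson rates
x = np.concatenate([rng.poisson(lam0, theta_true),
                    rng.poisson(lam1, n - theta_true)])

# Log-likelihood of a change at k (first k counts at rate lam0, the rest
# at lam1), up to a k-independent constant; uniform prior over k.
cs = np.concatenate([[0.0], np.cumsum(x)])
ks = np.arange(n + 1)
loglik = (cs[ks] * np.log(lam0) - ks * lam0
          + (cs[-1] - cs[ks]) * np.log(lam1) - (n - ks) * lam1)
post = np.exp(loglik - loglik.max())
post /= post.sum()
theta_map = int(np.argmax(post))     # posterior mode of the change time
```

The posterior mode concentrates near the true change time; a stopping rule would declare "disorder" once the posterior probability of a past change exceeds a threshold.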


## Measuring subjective well-being for public policy

Source: Office for National Statistics
Publisher: Office for National Statistics

Type: Monograph (non-peer-reviewed)
Format: application/pdf

Published: February 2011
Language: Portuguese

Search relevance: 347.2932%


## Small-number statistics, common sense, and profit: challenges and non-challenges for hurricane forecasting

Source: London School of Economics and Political Science Research
Publisher: London School of Economics and Political Science Research

Type: Conference or workshop item (non-peer-reviewed)
Format: application/pdf

Published: 2011
Language: Portuguese

Search relevance: 347.2932%


## Approximating conditional distribution functions using dimension reduction

Source: Institute of Mathematical Statistics
Publisher: Institute of Mathematical Statistics

Type: Article (peer-reviewed)
Format: application/pdf

Published: 2005
Language: Portuguese

Search relevance: 347.2932%


## Inference in components of variance models with low replication

Source: Institute of Mathematical Statistics
Publisher: Institute of Mathematical Statistics

Type: Article (peer-reviewed)
Format: application/pdf

Published: April 2003
Language: Portuguese

Search relevance: 347.2932%

In components of variance models the data are viewed as arising through a sum of two random variables, representing between- and within-group variation, respectively. The former is generally interpreted as a group effect, and the latter as error. It is assumed that these variables are stochastically independent and that the distributions of the group effect and the error do not vary from one instance to another. If each group effect can be replicated a large number of times, then standard methods can be used to estimate the distributions of both the group effect and the error. This cannot be achieved without replication, however. How feasible is distribution estimation if it is not possible to replicate prolifically? Can the distributions of random effects and errors be estimated consistently from a small number of replications of each of a large number of noisy group effects, for example, in a nonparametric setting? Often extensive replication is practically infeasible, in particular, if inherently small numbers of individuals exhibit any given group effect. Yet it is quite unclear how to conduct inference in this case. We show that inference is possible, even if the number of replications is as small as 2. Two methods are proposed...
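With exactly 2 replications per group, the classical method-of-moments variance components, a much simpler target than the paper's nonparametric distribution estimators, can be recovered from within-pair differences and pair means. The simulation parameters below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 20000                      # number of groups
sigma_a, sigma_e = 1.5, 0.7    # between-group and within-group sds
a = rng.normal(0.0, sigma_a, size=(m, 1))
x = a + rng.normal(0.0, sigma_e, size=(m, 2))    # 2 replications per group

# Method-of-moments variance components from pairs: the within-pair
# difference cancels the group effect, so its variance is 2 * sigma_e^2;
# the pair means have variance sigma_a^2 + sigma_e^2 / 2.
var_e_hat = np.mean((x[:, 0] - x[:, 1]) ** 2) / 2
var_a_hat = np.var(x.mean(axis=1)) - var_e_hat / 2
```

Here Var(X₁ − X₂)/2 estimates the error variance, and the variance of the pair means, minus half the error variance, estimates the group-effect variance; the paper goes much further, estimating the full distributions of both components.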


## The optimal timing of UI benefits: theory and evidence from Sweden

Source: Centre for Economic Performance, London School of Economics and Political Science
Publisher: Centre for Economic Performance, London School of Economics and Political Science

Type: Monograph (non-peer-reviewed)
Format: application/pdf

Published: July 2015
Language: Portuguese

Search relevance: 351.47312%

Subjects: HA Statistics; HN Social history and conditions. Social problems. Social reform; HV Social pathology. Social and public welfare. Criminology

This paper provides a simple yet general framework to analyze the optimal time profile of benefits during the unemployment spell. We derive simple sufficient-statistics formulae capturing the insurance value and incentive costs of unemployment benefits paid at different times during the spell. Our general approach allows us to revisit and evaluate, in a transparent way, the separate arguments for inclining or declining profiles put forward in the theoretical literature. We then estimate our sufficient statistics using administrative data on unemployment, income and wealth in Sweden. First, we exploit duration-dependent kinks in the replacement rate and find that the moral hazard cost of benefits is larger when they are paid earlier in the spell. Second, we find that the drop in consumption determining the insurance value of benefits is large from the start of the spell, but increases further throughout the spell. On average, savings and credit play a limited role in smoothing consumption. Our evidence therefore indicates that the recent change from a flat to a declining benefit profile in Sweden has decreased welfare. In fact, the local welfare gains push towards an increasing rather than a decreasing benefit profile over the spell.


## On the stochastic behaviour of optional processes up to random times

Source: Institute of Mathematical Statistics
Publisher: Institute of Mathematical Statistics

Type: Article (peer-reviewed)
Format: application/pdf

Published: 19 February 2015
Language: Portuguese

Search relevance: 347.2932%

In this paper, a study of random times on filtered probability spaces is undertaken. The main message is that, as long as distributional properties of optional processes up to the random time are involved, there is no loss of generality in assuming that the random time is actually a randomised stopping time. This perspective has advantages in both the theoretical and practical study of optional processes up to random times. Applications are given to financial mathematics, as well as to the study of the stochastic behaviour of Brownian motion with drift up to its time of overall maximum and up to last-passage times over finite intervals. Furthermore, a novel proof of the Jeulin–Yor decomposition formula via Girsanov’s theorem is provided.


## Strict local martingales and bubbles

Source: Institute of Mathematical Statistics
Publisher: Institute of Mathematical Statistics

Type: Article (peer-reviewed)
Format: application/pdf

Published: 2015
Language: Portuguese

Search relevance: 347.2932%

This paper deals with asset price bubbles modeled by strict local martingales. With any strict local martingale, one can associate a new measure, which is studied in detail in the first part of the paper. In the second part, we determine the “default term” apparent in risk-neutral option prices if the underlying stock exhibits a bubble modeled by a strict local martingale. Results for certain path dependent options and last passage time formulas are given.


## A statistical framework for the analysis of productivity and sustainable development

Source: Centre for Economic Performance, London School of Economics and Political Science
Publisher: Centre for Economic Performance, London School of Economics and Political Science

Type: Monograph (non-peer-reviewed)
Format: application/pdf

Published: April 2004
Language: Portuguese

Search relevance: 351.47312%

To analyse the consequences of the changing economic structure of the UK, we need a set of statistics broken down by industry that are consistent with the whole economy measures available from the national accounts. The theory of growth accounting then provides a framework in which the contribution of each industry to the national economy can be measured and assessed. This paper identifies the obstacles currently facing a researcher trying to implement this approach. It makes a number of recommendations for the improvement of official statistics.


## Factor modeling for high-dimensional time series: inference for the number of factors

Source: Institute of Mathematical Statistics
Publisher: Institute of Mathematical Statistics

Type: Article (peer-reviewed)
Format: application/pdf

Published: 2012
Language: Portuguese

Search relevance: 347.2932%

This paper deals with factor modeling for high-dimensional time series from a dimension-reduction viewpoint. Under stationary settings, the inference is simple in the sense that both the number of factors and the factor loadings are estimated via an eigenanalysis of a nonnegative definite matrix, and it is therefore applicable when the dimension of the time series is on the order of a few thousand. Asymptotic properties of the proposed method are investigated under two settings: (i) the sample size goes to infinity while the dimension of the time series is fixed; and (ii) both the sample size and the dimension of the time series go to infinity together. In particular, our estimators of the zero eigenvalues enjoy faster convergence (or slower divergence) rates, making the estimation of the number of factors easier. Notably, when the sample size and the dimension of the time series go to infinity together, the estimators of the eigenvalues are no longer consistent; however, our estimator of the number of factors, which is based on the ratios of the estimated eigenvalues, still works well. Furthermore, this estimation exhibits the so-called “blessing of dimensionality” property, in the sense that the performance of the estimation may improve as the dimension of the time series increases. A two-step procedure is investigated when the factors are of different degrees of strength. Numerical illustration with both simulated and real data is also reported.
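A minimal sketch of a ratio-of-eigenvalues estimator for the number of factors, here applied to the eigenvalues of the sample covariance matrix for simplicity (the paper's eigenanalysis is based on autocovariance matrices, and the dimensions and noise level below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
p, T, r_true = 40, 400, 3
A = rng.normal(size=(p, r_true))              # factor loadings
f = rng.normal(size=(r_true, T))              # latent factors
y = A @ f + 0.3 * rng.normal(size=(p, T))     # observed p-dimensional panel

# The largest drop in the ordered eigenvalues marks the factor/noise
# boundary; estimate the number of factors by the eigenvalue-ratio rule.
eig = np.sort(np.linalg.eigvalsh(np.cov(y)))[::-1]
ratios = eig[:p // 2] / eig[1:p // 2 + 1]
r_hat = int(np.argmax(ratios)) + 1
```

The first `r_true` eigenvalues grow with the loadings while the rest stay at the noise level, so the ratio λᵢ/λᵢ₊₁ spikes exactly at the factor/noise boundary.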


## Normalized least-squares estimation in time-varying ARCH models

Source: The Institute of Mathematical Statistics
Publisher: The Institute of Mathematical Statistics

Type: Article (peer-reviewed)
Format: application/pdf

Published: 2008
Language: Portuguese

Search relevance: 347.2932%

We investigate the time-varying ARCH (tvARCH) process. It is shown that it can be used to describe the slow decay of the sample autocorrelations of the squared returns often observed in financial time series, which warrants the further study of parameter estimation methods for the model. Since the parameters are changing over time, a successful estimator needs to perform well for small samples. We propose a kernel normalized-least-squares (kernel-NLS) estimator which has a closed form, and thus outperforms the previously proposed kernel quasi-maximum likelihood (kernel-QML) estimator for small samples. The kernel-NLS estimator is simple, works under mild moment assumptions and avoids some of the parameter space restrictions imposed by the kernel-QML estimator. Theoretical evidence shows that the kernel-NLS estimator has the same rate of convergence as the kernel-QML estimator. Due to the kernel-NLS estimator’s ease of computation, computationally intensive procedures can be used. A prediction-based cross-validation method is proposed for selecting the bandwidth of the kernel-NLS estimator. Also, we use a residual-based bootstrap scheme to bootstrap the tvARCH process. The bootstrap sample is used to obtain pointwise confidence intervals for the kernel-NLS estimator. It is shown that distributions of the estimator using the bootstrap and the “true” tvARCH estimator asymptotically coincide. We illustrate our estimation method on a variety of currency exchange and stock index data for which we obtain both good fits to the data and accurate forecasts.
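A stripped-down kernel least-squares fit of time-varying ARCH(1) coefficients from squared returns, to convey the general idea; the normalization that gives the paper's kernel-NLS estimator its name is omitted here, and all parameter choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 4000
u = np.arange(T) / T
a0_true = 0.2 + 0.3 * u          # slowly varying intercept
a1_true = 0.2 + 0.4 * u          # slowly varying ARCH coefficient
x = np.zeros(T)
for t in range(1, T):
    sig2 = a0_true[t] + a1_true[t] * x[t - 1] ** 2
    x[t] = np.sqrt(sig2) * rng.normal()

def kernel_ls_arch(x, t0, bandwidth):
    """Gaussian-kernel weighted least squares for (a0, a1) at time t0,
    regressing squared returns on lagged squared returns."""
    y = x[1:] ** 2
    z = np.column_stack([np.ones(len(y)), x[:-1] ** 2])
    t_idx = np.arange(1, len(x))
    w = np.exp(-0.5 * ((t_idx - t0) / bandwidth) ** 2)
    zw = z * w[:, None]
    return np.linalg.solve(zw.T @ z, zw.T @ y)

# Local estimate at mid-sample, where (a0, a1) = (0.35, 0.40)
a0_hat, a1_hat = kernel_ls_arch(x, t0=2000, bandwidth=400)
```

The kernel weights localize the regression around t₀, so the fit tracks the slowly varying coefficients; bandwidth selection (handled in the paper by prediction-based cross-validation) is fixed by hand here.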


## Denis Sargan: some perspectives

Source: London School of Economics and Political Science Research
Publisher: London School of Economics and Political Science Research

Type: Article (peer-reviewed)
Format: application/pdf

Published: June 2003
Language: Portuguese

Search relevance: 347.2932%

We attempt to present Denis Sargan’s work in some kind of historical perspective, in two ways. First, we discuss some previous members of the Tooke Chair of Economic Science and Statistics, which was founded in 1859 and which Sargan held. Second, we discuss one of his articles “Asymptotic Theory and Large Models” in relation to modern preoccupations with semiparametric econometrics.


## The probability of identification: applying ideas from forensic statistics to disclosure risk assessment

Source: Wiley-Blackwell
Publisher: Wiley-Blackwell

Type: Article (peer-reviewed)
Format: application/pdf

Published: 2007
Language: Portuguese

Search relevance: 451.47312%

The paper establishes a correspondence between statistical disclosure control and forensic statistics regarding their common use of the concept of ‘probability of identification’. It then investigates what lessons for disclosure control can be learnt from the forensic identification literature. The main lesson considered is that disclosure risk assessment cannot, in general, ignore the search method employed by an intruder seeking to achieve disclosure. The effects of several search methods are considered. Through consideration of the plausibility of assumptions and ‘worst case’ approaches, the paper suggests how the impact of the search method can be handled. The paper focuses on the foundations of disclosure risk assessment, providing justification for modelling assumptions underlying some existing record-level measures of disclosure risk. The effects of various search methods are illustrated in a numerical example based on microdata from a sample from the 2001 UK census.


## Enhancing students’ engagement through effective feedback, assessment and engaging activities

Source: Maths, Stats & OR Network
Publisher: Maths, Stats & OR Network

Type: Article (peer-reviewed)
Format: application/pdf

Published: 2011
Language: Portuguese

Search relevance: 354.0454%

This paper is about students’ perceptions of mathematics and statistics and their impact on students’ engagement, enthusiasm and academic self-efficacy.
I will discuss the strategies I developed to improve learning and teaching in statistics and mathematics service course classes, consisting of 15 students each, some of which also worked extremely well in my lectures to large audiences of about 350 students.
I would argue that such an approach could not only enhance students’ perceptions of the subjects and their engagement in classes/lectures but also promote critical thinking, independent learning, reasoning and several transferable skills associated with university education.
I will share the outcome of my teaching approach, which not only fulfilled my initial expectations but far surpassed them. It increased students’ engagement and enthusiasm, which improved their performance in class activities and coursework. Furthermore, it improved students’ perceptions of and attitudes to mathematics and statistics, as reflected in their feedback. I have included some of their comments to highlight the impact a teaching approach can have on students.


## Nonlinear time series modelling of highly fluctuating biological population over space - main results

Source: Department of Statistics, London School of Economics and Political Science
Publisher: Department of Statistics, London School of Economics and Political Science

Type: Monograph (non-peer-reviewed)
Format: application/pdf

Published: January 2002
Language: Portuguese

Search relevance: 451.47312%

This grant was to support research into nonlinear dynamics in space and time of highly variable populations. The project started in February 1999 at the University of Kent at Canterbury. Owing to the change of employment of both Tong and Yao, the grant was transferred to the London School of Economics and Political Science in January 2000. Dr Wenyang Zhang was appointed as a research officer at the outset and left after 22 months. Dr Georgios Tsiotas was appointed as a replacement for Dr Zhang for a period of 11 months. Dr Zhang now holds a lectureship in Statistics at the University of Kent at Canterbury. Together with our collaborators, we have finished over 30 papers on topics related to the project, and 26 of them have already been published in refereed journals. We have presented our results at international conferences and workshops on 12 occasions since 1999.


## Randomised and L1-penalty approaches to segmentation in time series and regression models

Source: London School of Economics and Political Science Thesis
Publisher: London School of Economics and Political Science Thesis

Type: Thesis (non-peer-reviewed)
Format: application/pdf

Published: August 2014
Language: Portuguese

Search relevance: 351.47312%

It is a common approach in statistics to assume that the parameters of a stochastic model change. The simplest such model involves parameters that can be exactly or approximately piecewise constant. In such a model, the aim is the a posteriori detection of the number and locations in time of the changes in the parameters. This thesis develops segmentation methods for non-stationary time series and regression models using randomised methods or methods that involve L1 penalties, which force the coefficients in a regression model to be exactly zero. Randomised techniques are not commonly found in nonparametric statistics, whereas L1 methods draw heavily from the variable selection literature. Considering these two categories together, apart from other contributions, enables a comparison between them by pointing out strengths and weaknesses. This is achieved by organising the thesis into three main parts.
First, we propose a new technique for detecting the number and locations of the change-points in the second-order structure of a time series. The core of the segmentation procedure is the Wild Binary Segmentation method (WBS) of Fryzlewicz (2014), a technique which involves a certain randomised mechanism. The advantage of WBS over the standard Binary Segmentation lies in its localisation feature...
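A minimal sketch of the WBS idea for the simpler problem of changes in the mean (the thesis targets the second-order structure): CUSUM statistics are maximized over many randomly drawn sub-intervals, the best exceedance of a threshold is declared a change-point, and the two resulting halves are searched recursively. The threshold, interval count, and data below are invented for the example:

```python
import numpy as np

def cusum_argmax(x):
    """Max |CUSUM| statistic for a single change in mean, and its location."""
    n = len(x)
    s = np.cumsum(x)
    b = np.arange(1, n)
    stat = np.abs(np.sqrt((n - b) / (n * b)) * s[b - 1]
                  - np.sqrt(b / (n * (n - b))) * (s[-1] - s[b - 1]))
    k = int(np.argmax(stat))
    return float(stat[k]), k + 1          # split after k+1 observations

def wbs(x, threshold, n_intervals=200, seed=5):
    """Wild Binary Segmentation sketch: maximize the CUSUM over random
    sub-intervals, record the best point if it clears the threshold,
    then recurse on the two resulting halves."""
    rng = np.random.default_rng(seed)
    cps = []

    def recurse(lo, hi):
        if hi - lo < 2:
            return
        best_stat, best_cp = 0.0, None
        intervals = [(lo, hi)]
        for _ in range(n_intervals):
            s0 = int(rng.integers(lo, hi - 1))
            e0 = int(rng.integers(s0 + 2, hi + 1))
            intervals.append((s0, e0))
        for s0, e0 in intervals:
            stat, k = cusum_argmax(x[s0:e0])
            if stat > best_stat:
                best_stat, best_cp = stat, s0 + k
        if best_stat > threshold:
            cps.append(best_cp)
            recurse(lo, best_cp)
            recurse(best_cp, hi)

    recurse(0, len(x))
    return sorted(cps)

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0.0, 1.0, 100),
                    rng.normal(2.0, 1.0, 100),
                    rng.normal(-1.0, 1.0, 100)])
cps = wbs(x, threshold=4.5)
```

The random intervals are the localisation device: an interval that happens to contain a single change-point yields a much sharper CUSUM peak than the full sample, which is exactly the advantage over standard Binary Segmentation noted in the abstract.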
