Page 19 of results from 1256 digital items found in 0.032 seconds

Wealth inequality: a survey

Cowell, Frank A.; Van Kerm, Philippe
Source: Wiley. Publisher: Wiley
Type: Article; PeerReviewed. Format: application/pdf
Published: 2015. Language: Portuguese
Search relevance: 239.31336%

Random rotation ensembles

Blaser, Rico; Fryzlewicz, Piotr
Source: Microtome Publishing. Publisher: Microtome Publishing
Type: Article; PeerReviewed. Format: application/pdf
Published: 2015. Language: Portuguese
Search relevance: 239.31336%
In machine learning, ensemble methods combine the predictions of multiple base learners to construct more accurate aggregate predictions. Established supervised learning algorithms inject randomness into the construction of the individual base learners in an effort to promote diversity within the resulting ensembles. An undesirable side effect of this approach is that it generally also reduces the accuracy of the base learners. In this paper, we introduce a method that is simple to implement yet general and effective in improving ensemble diversity with only modest impact on the accuracy of the individual base learners. By randomly rotating the feature space prior to inducing the base learners, we achieve favorable aggregate predictions on standard data sets compared to state-of-the-art ensemble methods, most notably for tree-based ensembles, which are particularly sensitive to rotation.
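The rotation step can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the authors' reference implementation; the function name and toy data are ours. A uniformly random rotation is obtained from the QR decomposition of a Gaussian matrix:

```python
import numpy as np

def random_rotation(d, rng):
    # QR of a Gaussian matrix gives a Haar-distributed orthogonal matrix
    # once the signs of R's diagonal are absorbed into Q.
    A = rng.standard_normal((d, d))
    Q, R = np.linalg.qr(A)
    Q *= np.sign(np.diag(R))
    if np.linalg.det(Q) < 0:      # flip one column to force det(Q) = +1
        Q[:, 0] = -Q[:, 0]
    return Q

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))   # toy feature matrix
Q = random_rotation(5, rng)
X_rot = X @ Q                       # features seen by one base learner

# Rotation preserves pairwise geometry but changes the axis-aligned
# splits available to a decision tree, which is the source of diversity.
assert np.allclose(np.linalg.norm(X, axis=1), np.linalg.norm(X_rot, axis=1))
```

In an ensemble, each base learner would receive its own independently drawn Q, and predictions are aggregated as usual.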

Extending procedural justice theory: a Fiducia report on the design of new survey indicators

Jackson, Jonathan; Bradford, Ben; Hough, Mike; Carrillo, Stephany
Source: European Commission. Publisher: European Commission
Type: Monograph; NonPeerReviewed. Format: application/pdf
Published: 2014. Language: Portuguese
Search relevance: 239.31336%
A key goal of the Fiducia project is to extend procedural justice theory in three new directions. The first relates to public perceptions of new forms of criminal behaviour. The second concerns the applicability of procedural justice theory to these new forms of behaviour. The third considers the notion that legitimacy crosses national borders. In this Fiducia report we motivate and outline core theory, concepts and measures. We also present the questionnaire to be fielded in seven European countries.

Partnership formation and dissolution over the life course: applying sequence analysis and event history analysis in the study of recurrent events

Helske, Satu; Steele, Fiona; Kokko, Katja; Räikkönen, Eija; Eerola, Mervi
Source: Society for Longitudinal and Life Course Studies. Publisher: Society for Longitudinal and Life Course Studies
Type: Article; PeerReviewed. Format: application/pdf
Published: 2015. Language: Portuguese
Search relevance: 239.31336%
We present two approaches to the analysis of recurrent events for discretely measured data, and show how these methods can complement each other when analysing co-residential partnership histories. Sequence analysis is a descriptive tool that gives an overall picture of the data and helps to find typical and atypical patterns in histories. Event history analysis is used to draw conclusions about the effects of covariates on the timing and duration of partnerships. As a substantive question, we studied how family background and childhood socio-emotional characteristics were related to later partnership formation and stability in a Finnish cohort born in 1959. We found that high self-control of emotions at age 8 was related to a lower risk of partnership dissolution and, for women, a lower probability of repartnering. Child-centred parenting practices during childhood were related to a lower risk of dissolution for women. Socially active boys were faster at forming partnerships as men.
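The descriptive, sequence-analysis side can be illustrated with a toy example: partnership histories coded as state sequences and compared with a simple position-wise (Hamming) distance. The state codes and sequences below are invented for illustration; real analyses, including this article's, use richer dissimilarities such as optimal matching with substitution and indel costs.

```python
import numpy as np

# Hypothetical yearly partnership states: S = single, C = cohabiting,
# M = married (codes are illustrative, not the article's coding scheme).
seqs = ["SSSCCM", "SSCCMM", "SSSSSC", "CCMMMM"]
state = {"S": 0, "C": 1, "M": 2}
codes = np.array([[state[ch] for ch in s] for s in seqs])

# Hamming distance: the number of years in which two histories differ.
D = (codes[:, None, :] != codes[None, :, :]).sum(axis=2)
# Clustering a matrix like D is how the descriptive stage of sequence
# analysis finds "typical and atypical patterns" in histories.
```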

Joint modelling compared with two stage methods for analysing longitudinal data and prospective outcomes: a simulation study of childhood growth and BP

Sayers, A.; Heron, J.; Smith, A.; Macdonald-Wallis, C.; Gilthorpe, M.; Steele, F.; Tilling, K.
Source: SAGE Publications. Publisher: SAGE Publications
Type: Article; PeerReviewed. Format: application/pdf
Published: 09/2014. Language: Portuguese
Search relevance: 239.31336%
There is a growing debate regarding the appropriate methods for analysing growth trajectories and their association with prospective dependent outcomes. Using the example of childhood growth and adult BP, we conducted an extensive simulation study to explore four two-stage and two joint modelling methods, and compared their bias and coverage in estimating the (unconditional) association between birth length and later BP, and the association between growth rate and later BP (conditional on birth length). We show that the two-stage method of using multilevel models to estimate growth parameters and relating these to the outcome gives unbiased estimates of the conditional associations between growth and outcome. Using simulations, we demonstrate that the simple methods resulted in bias in the presence of measurement error, as did the two-stage multilevel method when looking at the total (unconditional) association of birth length with the outcome. The two joint modelling methods gave unbiased results, but using the re-inflated residuals led to undercoverage of the confidence intervals. We conclude that either joint modelling or the simpler two-stage multilevel approach can be used to estimate conditional associations between growth and later outcomes...
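A stripped-down version of the two-stage idea can be sketched with per-child ordinary regressions (the paper itself uses multilevel models in stage 1). All data below are simulated and the coefficients are illustrative; note that this naive version ignores the uncertainty in the stage-1 estimates, which is exactly the source of the biases the simulation study quantifies.

```python
import numpy as np

rng = np.random.default_rng(1)
n_children, n_visits = 200, 6
ages = np.linspace(0, 5, n_visits)

# Simulate child-specific intercepts ("birth length") and slopes
# ("growth rate"), then an adult outcome driven by both.
intercepts = 50 + rng.normal(0, 2, n_children)
slopes = 6 + rng.normal(0, 0.5, n_children)
heights = (intercepts[:, None] + slopes[:, None] * ages
           + rng.normal(0, 0.8, (n_children, n_visits)))
bp = 100 + 0.5 * intercepts + 2.0 * slopes + rng.normal(0, 3, n_children)

# Stage 1: per-child OLS growth parameters.
Z = np.column_stack([np.ones(n_visits), ages])
coef, *_ = np.linalg.lstsq(Z, heights.T, rcond=None)   # 2 x n_children
est_int, est_slope = coef

# Stage 2: regress the prospective outcome on the estimated growth
# parameters (intercept, birth length, growth rate).
W = np.column_stack([np.ones(n_children), est_int, est_slope])
beta, *_ = np.linalg.lstsq(W, bp, rcond=None)
```

Because the stage-1 estimates carry measurement error, the stage-2 coefficients are somewhat attenuated relative to the simulated truth, which is the phenomenon the joint-modelling alternatives are designed to avoid.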

Cross-classified sampling: some estimation theory

Skinner, C. J.
Source: Elsevier. Publisher: Elsevier
Type: Article; PeerReviewed. Format: application/pdf
Published: 06/06/2015. Language: Portuguese
Search relevance: 239.31336%
For a population represented as a two-way array, we consider sampling via the product of independent row and column samples. Theory is presented for the estimation of a population total under alternative methods of sampling the rows and columns.
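Under the simplest version of this design, simple random samples of rows and columns drawn independently, an expansion estimator of the total can be sketched as follows (a toy simulation under assumptions of our own; the paper's theory covers more general row and column sampling schemes):

```python
import numpy as np

rng = np.random.default_rng(2)
R, C = 50, 40                          # population rows and columns
y = rng.gamma(2.0, 1.0, (R, C))        # two-way array of values
r, c = 10, 8                           # row and column sample sizes

def cc_estimate(y, r, c, rng):
    rows = rng.choice(R, r, replace=False)   # SRS of rows
    cols = rng.choice(C, c, replace=False)   # independent SRS of columns
    # Expansion estimator: each sampled cell represents (R/r)*(C/c) cells.
    return (R / r) * (C / c) * y[np.ix_(rows, cols)].sum()

# Under SRS in both dimensions the estimator is unbiased for the total,
# which a small Monte Carlo check makes visible.
estimates = [cc_estimate(y, r, c, rng) for _ in range(2000)]
```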

Large capital inflows, sectoral allocation and economic performance

Benigno, Gianluca; Converse, Nathan; Fornaro, Luca
Source: Centre for Economic Performance, London School of Economics and Political Science. Publisher: Centre for Economic Performance, London School of Economics and Political Science
Type: Monograph; NonPeerReviewed. Format: application/pdf
Published: 04/2015. Language: Portuguese
Search relevance: 239.31336%
This paper describes the stylized facts characterizing periods of exceptionally large capital inflows in a sample of 70 middle- and high-income countries over the last 35 years. We identify 155 episodes of large capital inflows and find that these events are typically accompanied by an economic boom and followed by a slump. Moreover, during episodes of large capital inflows capital and labor shift out of the manufacturing sector, especially if the inflows begin during a period of low international interest rates. However, accumulating reserves during the period in which capital inflows are unusually large appears to limit the extent of labor reallocation. Larger credit booms and capital inflows during the episodes we identify increase the probability of a sudden stop occurring during or immediately after the episode. In addition, the severity of the post-inflows recession is significantly related to the extent of labor reallocation during the boom, with a stronger shift of labor out of manufacturing during the inflows episode associated with a sharper contraction in the aftermath of the episode.

Stability of the exponential utility maximization problem with respect to preferences

Xing, Hao
Source: Wiley-Blackwell. Publisher: Wiley-Blackwell
Type: Article; PeerReviewed. Format: application/pdf
Published: 24/03/2014. Language: Portuguese
Search relevance: 239.31336%
This paper studies stability of the exponential utility maximization problem when there are small variations in the agent's utility function. Two settings are considered. First, in a general semimartingale model where random endowments are present, a sequence of utilities defined on R converges to the exponential utility. Under a uniform condition on their marginal utilities, convergence of value functions, optimal payouts and optimal investment strategies is obtained, and their rates of convergence are also determined. Stability of utility-based pricing is studied as an application. Second, a sequence of utilities defined on R+ converges to the exponential utility after shifting and scaling. Their associated optimal strategies, after appropriate scaling, converge to the optimal strategy for the exponential hedging problem. This complements Theorem 3.2 in M. Nutz, Probab. Theory Relat. Fields, 152, 2012, which establishes the convergence for a sequence of power utilities.

Matching a distribution by matching quantiles estimation

Sgouropoulos, Nikolaos; Yao, Qiwei; Yastremiz, Claudia
Source: Taylor & Francis. Publisher: Taylor & Francis
Type: Article; PeerReviewed. Format: application/pdf
Published: 25/05/2015. Language: Portuguese
Search relevance: 239.31336%
Motivated by the problem of selecting representative portfolios for backtesting counterparty credit risks, we propose a matching quantiles estimation (MQE) method for matching a target distribution by that of a linear combination of a set of random variables. An iterative procedure based on ordinary least squares (OLS) estimation is proposed to compute the MQE. The MQE can easily be modified by adding a LASSO penalty term if a sparse representation is desired, or by restricting the matching to a certain range of quantiles to match only part of the target distribution. The convergence of the algorithm and the asymptotic properties of the estimator, both with and without LASSO, are established. A measure and an associated statistical test are proposed to assess the goodness-of-match. The finite sample properties are illustrated by simulation. An application to selecting a counterparty representative portfolio with a real data set is reported. The proposed MQE also finds applications in portfolio tracking, which demonstrates the usefulness of combining MQE with LASSO.
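The iterative OLS procedure can be sketched as follows: at each step, pair the k-th order statistic of the target with the observation whose current fitted value has rank k, then refit by OLS. This is a simplified reading of the algorithm on simulated data, without the LASSO penalty or the formal goodness-of-match test; variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 500, 3
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -2.0, 0.5])
# The target is a fresh sample with the same distribution as X @ beta_true,
# so only its quantiles -- not any pairing with X -- carry information.
y_target = np.sort(rng.standard_normal((n, p)) @ beta_true)

beta = np.zeros(p)
for _ in range(100):
    order = np.argsort(X @ beta)        # ranks of the current fitted values
    y_matched = np.empty(n)
    y_matched[order] = y_target         # k-th order statistic to k-th rank
    beta_new, *_ = np.linalg.lstsq(X, y_matched, rcond=None)
    if np.allclose(beta_new, beta, atol=1e-10):
        break
    beta = beta_new

# Goodness-of-match: average gap between the sorted fitted values and the
# target quantiles (the paper builds a formal test around this idea).
gap = np.mean(np.abs(np.sort(X @ beta) - y_target))
```

Note that the MQE matches distributions, not the data-generating coefficients: with spherical Gaussian features, any beta of the right norm produces the same distribution of X @ beta, so only the quantile gap, not beta itself, is the relevant success criterion.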

Gaussian maximum likelihood estimation for ARMA models I: time series

Yao, Qiwei; Brockwell, Peter J
Source: Wiley-Blackwell. Publisher: Wiley-Blackwell
Type: Article; PeerReviewed. Format: application/pdf
Published: 11/2006. Language: Portuguese
Search relevance: 239.31336%
We provide a direct proof of the consistency and asymptotic normality of Gaussian maximum likelihood estimators for causal and invertible ARMA time series models, which were initially established by Hannan (1973) via the asymptotic properties of Whittle's estimator. This also paves the way to establishing similar results for the spatial processes presented in the follow-up paper Yao and Brockwell (2001).
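The full ARMA Gaussian likelihood must be maximized numerically, but for the causal AR(1) special case the likelihood conditional on the first observation is maximized in closed form by least squares, which makes the consistency easy to see in a small simulation (parameter values are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
phi_true, n = 0.6, 5000
# Simulate a causal AR(1): x_t = phi * x_{t-1} + e_t, from stationarity.
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0] / np.sqrt(1 - phi_true**2)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + e[t]

# Conditional on x_0, the Gaussian likelihood is maximized by OLS of x_t
# on x_{t-1}; this conditional MLE is consistent and asymptotically normal.
phi_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
```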

Explaining the behavior of joint and marginal Monte Carlo estimators in latent variable models with independence assumptions

Vitoratou, Silia; Ntzoufras, Ioannis; Moustaki, Irini
Source: Springer Netherlands. Publisher: Springer Netherlands
Type: Article; PeerReviewed. Format: application/pdf
Published: 24/07/2014. Language: Portuguese
Search relevance: 239.31336%
In latent variable models, parameter estimation can be implemented by using the joint or the marginal likelihood, based on independence or conditional independence assumptions. The same dilemma occurs within the Bayesian framework with respect to the estimation of the Bayesian marginal (or integrated) likelihood, which is the main tool for model comparison and averaging. In most cases, the Bayesian marginal likelihood is a high-dimensional integral that cannot be computed analytically, and a plethora of methods based on Monte Carlo integration (MCI) are used for its estimation. In this work, it is shown that the joint MCI approach makes subtle use of the properties of the adopted model, leading to increased error and bias in finite settings. The sources and the components of the error associated with estimators under the two approaches are identified here and provided in exact forms. Additionally, the effect of the sample covariation on the Monte Carlo estimators is examined. In particular, even under independence assumptions the sample covariance will be close to (but not exactly) zero, which surprisingly has a severe effect on the estimated values and their variability. To address this problem, an index of the sample's divergence from independence is introduced as a multivariate extension of covariance. The implications addressed here are important in the majority of practical problems appearing in Bayesian inference of multi-parameter models with analogous structures.
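The effect of sample covariation can be demonstrated with a toy expectation that factorizes under independence: estimating E[f(X)g(Y)] either from joint pairs or as the product of two marginal Monte Carlo averages. This is our own minimal illustration of the variance mechanism, not the paper's latent-variable setting.

```python
import numpy as np

rng = np.random.default_rng(5)
n_rep, n = 2000, 200
joint_est, marg_est = [], []
for _ in range(n_rep):
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)        # independent of x by construction
    fx, gy = np.exp(x), np.exp(y)     # E[e^X] = e^{1/2} for X ~ N(0,1)
    joint_est.append(np.mean(fx * gy))            # "joint" MCI estimator
    marg_est.append(np.mean(fx) * np.mean(gy))    # product of marginal MCIs

truth = np.exp(1.0)                   # E[e^X e^Y] = e under independence
# Both estimators are essentially unbiased, but the joint one inherits the
# nonzero sample covariance between f(x_i) and g(y_i), inflating its variance.
```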

Double-normal pairs in the plane and on the sphere

Pach, János; Swanepoel, Konrad J.
Source: Springer. Publisher: Springer
Type: Article; PeerReviewed. Format: application/pdf
Published: 2014. Language: Portuguese
Search relevance: 239.31336%
A double-normal pair of a finite set S of points from Euclidean space is a pair of points {p, q} from S such that S lies in the closed strip bounded by the hyperplanes through p and q that are perpendicular to pq. A double-normal pair pq is strict if S ∖ {p, q} lies in the open strip. We answer a question of Martini and Soltan (2006) by showing that a set of n ≥ 3 points in the plane has at most 3⌊n/2⌋ double-normal pairs. This bound is sharp for each n ≥ 3. In a companion paper, we have asymptotically determined this maximum for points in R³. Here we show that if the set lies on some 2-sphere, it has at most 17n/4 − 6 double-normal pairs. This bound is attained for infinitely many values of n. We also establish tight bounds for the maximum number of strict double-normal pairs in a set of n points in the plane and on the sphere.
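The strip condition is easy to check by brute force: s lies in the closed strip for the pair (p, q) exactly when the projection (s − p)·(q − p) falls in [0, |q − p|²]. A small sketch (function name ours), using the vertices of a square, for which all 6 pairs are double-normal, attaining the planar bound 3⌊n/2⌋ = 6 for n = 4:

```python
import numpy as np
from itertools import combinations

def double_normal_pairs(S):
    # Brute-force O(n^3) check of the closed-strip condition for each pair.
    S = np.asarray(S, float)
    pairs = []
    for i, j in combinations(range(len(S)), 2):
        d = S[j] - S[i]
        t = (S - S[i]) @ d            # projections onto the direction pq
        if t.min() >= -1e-9 and t.max() <= d @ d + 1e-9:
            pairs.append((i, j))
    return pairs

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
dn = double_normal_pairs(square)      # sides and diagonals all qualify
```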

Finite sample improvement in statistical inference with I(1) processes

Marinucci, D; Robinson, Peter M
Source: The London School of Economics and Political Science, Suntory and Toyota International Centres for Economics and Related Disciplines. Publisher: The London School of Economics and Political Science, Suntory and Toyota International Centres for Economics and Related Disciplines
Type: Monograph; NonPeerReviewed. Format: application/pdf
Published: 07/2001. Language: Portuguese
Search relevance: 239.31336%
Robinson and Marinucci (1998) investigated the asymptotic behaviour of a narrow-band semiparametric procedure termed Frequency Domain Least Squares (FDLS) in the broad context of fractional cointegration analysis. Here we restrict attention to the standard case when the data are I(1) and the cointegrating errors are I(0), proving that modifications of the Fully-Modified Ordinary Least Squares (FM-OLS) procedure of Phillips and Hansen (1990) which use the FDLS idea have the same asymptotically desirable properties as FM-OLS and, on the basis of a Monte Carlo study, finding evidence that they have superior finite-sample properties; the new procedures are also shown to compare satisfactorily with parametric estimates.

Estimating and forecasting with a dynamic spatial panel data model

Baltagi, Badi H.; Fingleton, Bernard; Pirotte, Alain
Source: Spatial Economics Research Centre, London School of Economics and Political Science. Publisher: Spatial Economics Research Centre, London School of Economics and Political Science
Type: Monograph; NonPeerReviewed. Format: application/pdf
Published: 11/2011. Language: Portuguese
Search relevance: 239.31336%
This paper focuses on the estimation and predictive performance of several estimators for the dynamic and autoregressive spatial lag panel data model with spatially correlated disturbances. In the spirit of Arellano and Bond (1991) and Mutl (2006), a dynamic spatial GMM estimator is proposed based on Kapoor, Kelejian and Prucha (2007) for the Spatial AutoRegressive (SAR) error model. The main idea is to mix non-spatial and spatial instruments to obtain consistent estimates of the parameters. Then, a linear predictor of this spatial dynamic model is derived. Using Monte Carlo simulations, we compare the performance of the GMM spatial estimator to that of spatial and non-spatial estimators and illustrate our approach with an application to new economic geography.

Sparse modelling and estimation for nonstationary time series and high-dimensional data

Cho, Haeran
Source: London School of Economics and Political Science Thesis. Publisher: London School of Economics and Political Science Thesis
Type: Thesis; NonPeerReviewed. Format: application/pdf
Published: 09/2010. Language: Portuguese
Search relevance: 239.31336%
Sparse modelling has attracted great attention as an efficient way of handling statistical problems in high dimensions. This thesis considers sparse modelling and estimation in a selection of problems such as breakpoint detection in nonstationary time series, nonparametric regression using piecewise constant functions and variable selection in high-dimensional linear regression. We first propose a method for detecting breakpoints in the second-order structure of piecewise stationary time series, assuming that those structural breakpoints are sufficiently scattered over time. Our choice of time series model is the locally stationary wavelet process (Nason et al., 2000), under which the entire second-order structure of a time series is described by wavelet-based local periodogram sequences. As the initial stage of breakpoint detection, we apply a binary segmentation procedure to wavelet periodogram sequences at each scale separately, which is followed by within-scale and across-scales post-processing steps. We show that the combined methodology achieves consistent estimation of the breakpoints in terms of their total number and locations, and investigate its practical performance using both simulated and real data. Next...
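The core of binary segmentation is a CUSUM statistic maximized over candidate split points. The thesis applies it to wavelet periodogram sequences; as a much cruder stand-in, the sketch below applies the same statistic to squared observations of a series whose variance jumps (all choices here are ours):

```python
import numpy as np

rng = np.random.default_rng(6)
# Piecewise stationary toy series: variance changes at t = 300.
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(0, 3, 200)])

def cusum_break(z):
    # CUSUM statistic used inside binary segmentation: contrast of scaled
    # left and right partial sums, maximized over split points k.
    n = len(z)
    k = np.arange(1, n)
    left = np.cumsum(z)[:-1]
    total = z.sum()
    stat = np.abs(np.sqrt((n - k) / (n * k)) * left
                  - np.sqrt(k / (n * (n - k))) * (total - left))
    return int(np.argmax(stat)) + 1    # estimated breakpoint location

# Squared observations act as a crude local-variance sequence.
bhat = cusum_break(x ** 2)
```

In the full method this step is run on each wavelet scale separately and then applied recursively to the resulting segments, with post-processing across scales.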

Robust asset allocation under model ambiguity

Tobelem-Foldvari, Sandrine
Source: London School of Economics and Political Science Thesis. Publisher: London School of Economics and Political Science Thesis
Type: Thesis; NonPeerReviewed. Format: application/pdf
Published: 09/2010. Language: Portuguese
Search relevance: 239.31336%
A decision maker, when facing a decision problem, often considers several models to represent the outcomes of the decision variable considered. More often than not, the decision maker does not fully trust any of those models and hence displays ambiguity, or model uncertainty, aversion. In this PhD thesis, focus is given to the specific case of the asset allocation problem under ambiguity faced by financial investors. The aim is not to find an optimal solution for the investor, but rather to develop a general methodology that can be applied in particular to the asset allocation problem and allows the investor to find a tractable, easy-to-compute solution for this problem, taking ambiguity into account. This PhD thesis is structured as follows: First, some classical and widely used models to represent asset returns are presented. It is shown that the performance of the asset portfolios built using those single models is very volatile. No model performs better than the others consistently over the period considered, which gives empirical evidence that no model can be fully trusted over the long run and that several models are needed to achieve the best asset allocation possible. Therefore, classical portfolio theory must be adapted to take into account ambiguity or model uncertainty. Many authors have attempted at an early stage to include ambiguity aversion in the asset allocation problem. A review of the literature is presented to outline the main models proposed. However...

Stochastic models and methods for the assessment of earthquake risk in insurance

Jiménez-Huerta, Diego
Source: London School of Economics and Political Science Thesis. Publisher: London School of Economics and Political Science Thesis
Type: Thesis; NonPeerReviewed. Format: application/pdf
Published: 05/2009. Language: Portuguese
Search relevance: 239.31336%
The problem of earthquake risk assessment and management in insurance is a challenging one at the interface of geophysics, engineering seismology, stochastics, insurance mathematics and economics. In this work, I propose stochastic models and methods for the assessment of earthquake risk from an insurer's point of view, where the aim is not to address problems in the financial mathematics and economics of risk selection, pricing, portfolio management, and risk transfer strategies such as reinsurance and securitisation, but to enable the latter through the characterisation of the foundation of any risk management consideration in insurance: the distribution of losses over a period of time for a portfolio of risks. Insurance losses are assumed to be generated by a loss process that is in turn governed by an earthquake process, a point process marked with the earthquake's hypocentre and magnitude, and a conditional loss distribution for an insurance portfolio, governing the loss size given the hypocentre and magnitude of the earthquake, and the physical characteristics of the portfolio as described in the individual policy records. From the modelling perspective, I examine the (non-trivial) minutiae around the infrastructure underpinning the loss process. A novel model of the earthquake process...

The methodology of flowgraph models

Ren, Yu
Source: London School of Economics and Political Science Thesis. Publisher: London School of Economics and Political Science Thesis
Type: Thesis; NonPeerReviewed. Format: application/pdf
Published: 08/12/2011. Language: Portuguese
Search relevance: 239.31336%
Flowgraph models are directed graph models for describing the dynamic changes in a stochastic process. They are one class of multistate models applied to analyse time-to-event data. The main motivation of flowgraph models is to determine the distribution of the total waiting time until an event of interest occurs in a stochastic process that progresses through various states. This thesis applies the methodology of flowgraph models to the study of Markov and semi-Markov processes. The underlying approach of the thesis is that access to the moment generating function (MGF) and cumulant generating function (CGF), provided by Mason's rule, enables us to use the Method of Moments (MM), which depends on moments and cumulants. We give a new derivation of Mason's rule to compute the total waiting time MGF based on the internode transition matrix of a flowgraph. Next, we demonstrate methods to determine and approximate the distribution of the total waiting time based on the inversion of the MGF, including an alternative approach using the Padé approximation of the MGF, which always yields a closed-form density. For parameter estimation, we extend the Expectation-Maximization (EM) algorithm to estimate parameters in the mixture of negative-weight exponential densities. Our second contribution is to develop a bias correction method for the Method of Moments (BCMM). By investigating methods for tail area approximation...
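For the simplest flowgraph, a series path 1 → 2 → 3, Mason's rule reduces to multiplying the branch transmittances, so the total-waiting-time MGF is the product of the branch MGFs. A minimal numeric check (exponential waits with rates of our choosing; the thesis treats general graphs with loops and parallel paths):

```python
import numpy as np

lam1, lam2 = 2.0, 5.0

def M(s):
    # Product of branch MGFs = total-waiting-time MGF for a series
    # flowgraph with exponential waits (valid for s < min(lam1, lam2)).
    return (lam1 / (lam1 - s)) * (lam2 / (lam2 - s))

h = 1e-6
mean_from_mgf = (M(h) - M(-h)) / (2 * h)   # M'(0) = E[T] = 1/lam1 + 1/lam2

# Simulation check: total waiting time is the sum of the two branch waits.
rng = np.random.default_rng(9)
sim = rng.exponential(1 / lam1, 100_000) + rng.exponential(1 / lam2, 100_000)
```

Inversion of such MGFs to densities (exact, saddlepoint, or via the Padé approximation mentioned above) is what the thesis then builds its estimation methods on.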

Essays in modelling and estimating Value-at-Risk

Yan, Yang
Source: London School of Economics and Political Science Thesis. Publisher: London School of Economics and Political Science Thesis
Type: Thesis; NonPeerReviewed. Format: application/pdf
Published: 09/2014. Language: Portuguese
Search relevance: 239.31336%
The thesis concerns semiparametric modelling and forecasting of Value-at-Risk models, and the applications of these to financial data. Two general classes of semiparametric VaR models are proposed. The first method is introduced by defining some efficient estimators of the risk measures in a semiparametric GARCH model through moment constraints and a quantile estimator based on inverting an empirical likelihood weighted distribution. It is found that the new quantile estimator is uniformly more efficient than the simple empirical quantile and a quantile estimator based on normalized residuals. At the same time, the efficiency gain in error quantile estimation hinges on the efficiency of estimators of the variance parameters. We show that the same conclusion applies to the estimation of conditional Expected Shortfall. The second model proposes a new method to forecast one-period-ahead Value-at-Risk (VaR) in general ARCH(1) models with possibly heavy-tailed errors. The proposed method is based on least squares estimation for the log-transformed model. This method imposes weak moment conditions on the errors. The asymptotic distribution also accounts for the parameter uncertainty in volatility estimation. We test our models against some conventional VaR forecasting methods...
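As a baseline for the kind of forecast being discussed, one-step-ahead VaR in an ARCH model can be computed by filtered historical simulation: scale the empirical quantile of the standardized residuals by the next period's volatility. This is a standard textbook method, not the thesis's least-squares log-transform estimator, and all parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
w, a, n = 0.1, 0.5, 3000
# Toy ARCH(1) returns with heavy-tailed (Student-t, 6 df) errors,
# rescaled to unit variance.
eps = rng.standard_t(6, n) / np.sqrt(6 / 4)
r = np.zeros(n)
sig2 = np.zeros(n)
sig2[0] = w / (1 - a)               # unconditional variance as a start
r[0] = np.sqrt(sig2[0]) * eps[0]
for t in range(1, n):
    sig2[t] = w + a * r[t - 1] ** 2
    r[t] = np.sqrt(sig2[t]) * eps[t]

# One-step-ahead 99% VaR: empirical 1% quantile of the standardized
# residuals, scaled by tomorrow's conditional volatility.
resid = r / np.sqrt(sig2)
sig2_next = w + a * r[-1] ** 2
var_99 = -np.sqrt(sig2_next) * np.quantile(resid, 0.01)
```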

Brownian excursions in mathematical finance

Zhang, You You
Source: London School of Economics and Political Science Thesis. Publisher: London School of Economics and Political Science Thesis
Type: Thesis; NonPeerReviewed. Format: text
Published: 12/2014. Language: Portuguese
Search relevance: 239.31336%
The Brownian excursion is defined as a standard Brownian motion conditioned on starting and ending at zero and staying positive in between. The first part of the thesis deals with functionals of the Brownian excursion, including the first hitting time, last passage time, maximum and the time it is achieved. Our original contribution to knowledge is the derivation of the joint probability of the maximum and the time it is achieved. We include a financial application of our probabilistic results on Parisian default risk of zero-coupon bonds. In the second part of the thesis the Parisian, occupation and local times of a drifted Brownian motion are considered, using a two-state semi-Markov process. New versions of Parisian options are introduced based on the probabilistic results, and explicit formulae for their prices are presented in the form of Laplace transforms. The main focus of the last part of the thesis is on the joint probability of the Parisian and hitting times of Brownian motion. The difficulty here lies in distinguishing between different scenarios of the sample path. Results are achieved by the use of infinitesimal generators on perturbed Brownian motion and applied to innovative equity exotics as generalizations of the Barrier and Parisian option, with the advantage of being highly adaptable to investors' beliefs in the market.
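Since the positivity conditioning is singular, one cannot sample an excursion by rejecting negative bridges; a standard device (our illustration, not from the thesis) is Vervaat's transform, which turns a Brownian bridge into a normalized Brownian excursion by cutting the path at its minimum and swapping the two pieces:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
t = np.linspace(0, 1, n + 1)

# Brownian bridge from 0 to 0 on [0, 1], built from a random walk.
dW = rng.normal(0, np.sqrt(1 / n), n)
W = np.concatenate([[0.0], np.cumsum(dW)])
bridge = W - t * W[-1]

# Vervaat's transform: cut the bridge at its minimum and swap the two
# pieces; the result is a discretized normalized Brownian excursion.
m = int(np.argmin(bridge))
excursion = np.concatenate([bridge[m:], bridge[1:m + 1]]) - bridge[m]
```

Samples like this are useful for checking distributional results about excursion functionals, such as the joint law of the maximum and the time it is achieved, by Monte Carlo.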