-
Acknowledgments.
The people whom the authors wish to thank for their assistance in the creation of this work are mentioned.
-
Background Material.
The author defines the "order of magnitude" of a sequence, determined by considering the behavior of the sequence as the sample size n increases, and explains the use of "big O" and "small o" notation for a positive integer n. Orders in probability are also discussed.
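This notation is standard; a compact statement of the usual definitions (my own formulation, not quoted from the text):

```latex
% Big O: the sequence is bounded relative to n^alpha
a_n = O(n^{\alpha}) \;\Longleftrightarrow\; \exists\, M > 0 :\ |a_n| \le M n^{\alpha}\ \text{for all large } n
% Small o: the sequence is negligible relative to n^alpha
a_n = o(n^{\alpha}) \;\Longleftrightarrow\; a_n / n^{\alpha} \to 0
% Orders in probability (stochastic boundedness and convergence)
X_n = O_p(1) \;\Longleftrightarrow\; \forall\,\varepsilon > 0\ \exists\, M :\ \sup_n \Pr\big(|X_n| > M\big) < \varepsilon
X_n = o_p(1) \;\Longleftrightarrow\; X_n \xrightarrow{\,p\,} 0
```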
-
Chapter 1: Introduction.
A preface to the 2008 monograph "Large Dimensional Factor Analysis" is presented.
-
Chapter 2: Factor Models.
Chapter 2 of the book "Large Dimensional Factor Analysis," by Jushan Bai and Serena Ng, is presented. The chapter is titled "Factor Models." It sets up the mathematical notation and distinguishes between static and dynamic factor models. It also discusses the idiosyncratic errors and the factor loadings of factor analysis.
-
Chapter 3: Principal Components and Related Identities.
Chapter 3 of the book "Large Dimensional Factor Analysis," by Jushan Bai and Serena Ng, is presented. The chapter is titled "Principal Components and Related Identities." It notes that the method of asymptotic principal components was used by Connor and Korajczyk. It also explains that the estimated factors and loadings are obtained, after normalizing the data matrix, from the eigenvectors corresponding to the largest eigenvalues of an N x N matrix.
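The eigenvector computation described here can be sketched in Python. This is a minimal illustration with simulated data; the T x T form of the eigenproblem is used for convenience (it yields the same factor space as the N x N form), and the normalization shown is one common convention, not necessarily the book's exact one:

```python
import numpy as np

def pc_factors(X, r):
    """Estimate r factors by principal components from a T x N panel X.
    Returns factor estimates Fhat (T x r) and loadings Lhat (N x r)."""
    T, N = X.shape
    # Eigen-decompose the T x T matrix XX' (one could equally use the N x N matrix X'X)
    eigvals, eigvecs = np.linalg.eigh(X @ X.T)
    order = np.argsort(eigvals)[::-1][:r]       # indices of the r largest eigenvalues
    Fhat = np.sqrt(T) * eigvecs[:, order]       # normalization so that Fhat'Fhat / T = I_r
    Lhat = X.T @ Fhat / T                       # loadings recovered by least squares
    return Fhat, Lhat

# Simulated one-factor panel: X = F L' + noise
rng = np.random.default_rng(0)
T, N = 200, 100
F = rng.standard_normal((T, 1))
L = rng.standard_normal((N, 1))
X = F @ L.T + 0.5 * rng.standard_normal((T, N))
Fhat, Lhat = pc_factors(X, 1)
# The estimate spans the true factor space up to sign/scale
corr = abs(np.corrcoef(Fhat[:, 0], F[:, 0])[0, 1])
print(round(corr, 2))
```

With a strong factor and many series, the correlation between the estimated and true factor is close to one, which is the sense in which the factor space is recovered.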
-
Chapter 4: Theory: Stationary Data.
Chapter 4 of the book "Large Dimensional Factor Analysis," by Jushan Bai and Serena Ng, is presented. The chapter is titled "Theory: Stationary Data." It discusses the assumptions placed on stationary factor models, covering the factors and the loadings. It also addresses covariance stationarity and the behavior of the largest eigenvalue in the presence of serial and cross-sectional correlation.
-
Chapter 5: Applications.
Chapter 5 of the book "Large Dimensional Factor Analysis," by Jushan Bai and Serena Ng, is presented. The chapter is titled "Applications." It discusses how estimated factors can be used as predictors, as instruments in place of observed variables, and as measurements for testing the validity of observed proxies. It also states that applications of factor analysis include factor-augmented regressions (FAR), in particular linear factor-augmented regressions.
-
Chapter 6: Panel Regression Models with a Factor Structure in the Errors.
Chapter 6 of the book "Large Dimensional Factor Analysis," by Jushan Bai and Serena Ng, is presented. The chapter is titled "Panel Regression Models with a Factor Structure in the Errors." It discusses different regression models, including the fixed-effects model estimated by least squares, the pure factor model with a symmetric matrix, and a final least squares estimator based on eigenvectors.
-
Chapter 7: Theory: Non-Stationary Data.
Chapter 7 of the book "Large Dimensional Factor Analysis," by Jushan Bai and Serena Ng, is presented. The chapter is titled "Theory: Non-Stationary Data." It discusses how the asymptotic analysis depends on whether the factors and the errors are non-stationary. It also notes that when the common factors contain stochastic trends, the data are non-stationary.
-
Chapter 8: How Precise are the Factor Estimates?
Chapter 8 of the book "Large Dimensional Factor Analysis," by Jushan Bai and Serena Ng, is presented. The chapter is titled "How Precise are the Factor Estimates?" It discusses the precision of the principal components factor estimates. It notes that the principal components estimator is based on an unweighted objective function that minimizes the sum of squared residuals. It also states that the precision of the factor estimates is illustrated by a Monte Carlo experiment.
-
Chapter 9: Conclusion.
Chapter 9 of the book "Large Dimensional Factor Analysis," by Jushan Bai and Serena Ng, is presented. The chapter is titled "Conclusion." It gives an overview of the theoretical results concerning the use of principal components as estimated factors. It also states that the estimator measures the factor space well.
-
Computational Considerations.
The article discusses the use of binning methods, fast Fourier transforms, and parallelism to ease the computational burden associated with kernel methods. It states that binning is an approximate method in which one first "pre-bins" the data on an equally spaced mesh and then applies an appropriately modified estimator to the binned data. The use of fast Fourier transforms (FFTs), which restricts estimation to a grid of points, further enhances computational speed.
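The pre-binning-plus-FFT idea can be sketched as follows. This is a simplified Python illustration, not the article's exact algorithm: it uses simple (rather than linear) binning, a Gaussian kernel, and a rule-of-thumb bandwidth, and the function name is my own:

```python
import numpy as np

def binned_kde(x, grid_n=512, h=None):
    """Approximate Gaussian kernel density estimate via pre-binning and FFT convolution."""
    n = len(x)
    if h is None:                                   # Silverman-style rule-of-thumb bandwidth
        h = 1.06 * x.std() * n ** (-0.2)
    lo, hi = x.min() - 3 * h, x.max() + 3 * h
    grid = np.linspace(lo, hi, grid_n)
    delta = grid[1] - grid[0]
    # Step 1: "pre-bin" the data on an equally spaced mesh (simple binning)
    counts, _ = np.histogram(x, bins=grid_n, range=(lo, hi))
    # Step 2: evaluate the kernel on the mesh, centered at index grid_n // 2
    k = np.exp(-0.5 * ((np.arange(grid_n) - grid_n // 2) * delta / h) ** 2)
    k /= k.sum()
    # Step 3: smooth the binned counts by FFT convolution, then rescale to a density
    dens = np.fft.irfft(np.fft.rfft(counts, 2 * grid_n) * np.fft.rfft(k, 2 * grid_n))
    dens = dens[grid_n // 2 : grid_n // 2 + grid_n] / (n * delta)
    return grid, dens

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
grid, dens = binned_kde(x)
```

The convolution costs O(grid_n log grid_n) regardless of the sample size once the data are binned, which is the source of the speedup described in the article.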
-
Conclusions.
The article summarizes the articles discussed in the January 2008 issue of "Now." The authors state that by demonstrating the scope of semiparametric and nonparametric models across a variety of application areas, they hope to have encouraged interested readers to try some of the discussed methods in their own problem domains.
-
Conditional Density Estimation.
The article focuses on conditional density functions. It states that though they are seldom modeled directly in parametric settings and have received even less attention in kernel settings, conditional density functions underlie many popular statistical objects of interest. They are useful in a range of tasks, including modeling count data and modeling conditional quantiles through estimation of a conditional cumulative distribution function (CDF).
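A kernel conditional density estimator of the form f(y|x) = f(x,y) / f(x) can be sketched as below. This is illustrative only: it uses Gaussian product kernels and fixed, user-supplied bandwidths rather than the data-driven choices discussed elsewhere in the issue, and the names are my own:

```python
import numpy as np

def cond_density(y_grid, x0, x, y, hx, hy):
    """Kernel estimate of f(y | x = x0) = fhat(x0, y) / fhat(x0), Gaussian kernels."""
    kx = np.exp(-0.5 * ((x - x0) / hx) ** 2) / (hx * np.sqrt(2 * np.pi))
    fx = kx.mean()                                    # marginal estimate fhat(x0)
    out = np.empty_like(y_grid, dtype=float)
    for i, y0 in enumerate(y_grid):
        ky = np.exp(-0.5 * ((y - y0) / hy) ** 2) / (hy * np.sqrt(2 * np.pi))
        out[i] = (kx * ky).mean() / fx                # joint over marginal
    return out

# Simulated data where y depends on x: the conditional density at x0 = 1
# should concentrate near y = 2
rng = np.random.default_rng(2)
x = rng.standard_normal(5000)
y = 2.0 * x + 0.3 * rng.standard_normal(5000)
y_grid = np.linspace(0.0, 4.0, 81)
fhat = cond_density(y_grid, 1.0, x, y, hx=0.2, hy=0.2)
```

Integrating the estimate over y recovers (approximately) one, and its mode tracks the conditional mean, which is how such estimates feed into conditional quantile and count-data modeling.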
-
Consistent Hypothesis Testing.
The article focuses on hypothesis testing. It states that there exist parametric methods for testing the correct specification of parametric models, for testing equality of distributions, and for testing equality of regression functions. Parametric tests typically require the analyst to specify the set of parametric alternatives against which the null hypothesis is tested. If the null is false and there exist alternative models that the test cannot detect, the test is said to be "inconsistent."
-
Density and Probability Function Estimation.
The article focuses on frequency probability estimator and kernel density estimation. It discusses "generalized product kernels," kernels for categorical data, data-driven bandwidth selection and histograms. It states that the histogram is a non-smooth nonparametric method applied to estimate the probability density function (PDF) of a continuous variable. The frequency probability estimator is a non-smooth nonparametric approach used to calculate probabilities of discrete events.
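The two non-smooth estimators mentioned here, the histogram for a continuous PDF and the frequency estimator for discrete probabilities, can be sketched in a few lines of Python (function names are illustrative, not from the article):

```python
import numpy as np

def hist_pdf(x, bins=20):
    """Histogram estimate of a continuous PDF: bin counts / (n * bin width)."""
    counts, edges = np.histogram(x, bins=bins)
    width = edges[1] - edges[0]
    return counts / (len(x) * width), edges

def freq_prob(z):
    """Frequency probability estimator for a discrete variable:
    phat(c) = (number of observations equal to c) / n."""
    vals, counts = np.unique(z, return_counts=True)
    return dict(zip(vals.tolist(), (counts / len(z)).tolist()))

pdf, edges = hist_pdf(np.arange(100.0), bins=10)
probs = freq_prob([0, 0, 1, 1, 1])
```

By construction the histogram estimate integrates to one over its support, and the frequency estimates sum to one; both are consistent but non-smooth, which motivates the kernel-smoothed alternatives the article goes on to discuss.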
-
Introduction.
The article discusses nonparametric methods, along with previous and recent studies of them. It states that nonparametric methods are statistical techniques that do not require a researcher to specify functional forms for the objects being estimated; instead, the data itself shapes the resulting model. The appeal of nonparametric methods is that they relax the parametric assumptions imposed on the data-generating process and let the data determine an appropriate model.
-
Large Dimensional Factor Analysis.
Econometric analysis of large dimensional factor models has been a heavily researched topic in recent years. This review surveys the main theoretical results that relate to static factor models or dynamic factor models that can be cast in a static framework. Among the topics covered are how to determine the number of factors, how to conduct inference when estimated factors are used in regressions, how to assess the adequacy of observed variables as proxies for latent factors, how to exploit the estimated factors in unit root and common trend tests, and how to estimate panel cointegration models. The fundamental result that justifies these analyses is that the method of asymptotic principal components consistently estimates the true factor space. We use simulations to better understand the conditions that can affect the precision of the factor estimates. ABSTRACT FROM AUTHOR. Copyright of Foundations & Trends in Econometrics is the property of Now Publishers and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract.
-
Nonparametric Econometrics: A Primer.
This review is a primer for those who wish to familiarize themselves with nonparametric econometrics. Though the underlying theory for many of these methods can be daunting for some practitioners, this article will demonstrate how a range of nonparametric methods can in fact be deployed in a fairly straightforward manner. Rather than aiming for encyclopedic coverage of the field, we shall restrict attention to a set of touchstone topics while making liberal use of examples for illustrative purposes. We will emphasize settings in which the user may wish to model a dataset comprised of continuous, discrete, or categorical data (nominal or ordinal), or any combination thereof. We shall also consider recent developments in which some of the variables involved may in fact be irrelevant, which alters the behavior of the estimators and optimal bandwidths in a manner that deviates substantially from conventional approaches. ABSTRACT FROM AUTHOR.
-
Notations and Acronyms.
The article lists some notation and associated definitions used in the articles published in the January 2008 issue of "Now," including f(x), F(x) and g(x).
-
Panel Data Models.
The article focuses on nonparametric and semiparametric estimation of panel data models. It states that a data panel is a collection of N individual time series, which may be short ("small T") or long ("large T"). However, when T is large and N is small, a long time series exists for each individual unit, and in such cases estimating a panel data model can be avoided by simply modeling each of the N individual time series separately.
-
References.
The sources cited within this issue are presented, including "Statistical Inference in Factor Analysis," by T. W. Anderson and H. Rubin; "Panel Data Models with Interactive Fixed Effects," by J. Bai; and "Determining the Number of Factors in Approximate Factor Models," by J. Bai and S. Ng.
-
References.
References for the articles published in the January 2008 issue of "Now" are presented.
-
Regression.
The article discusses local constant kernel regression and local polynomial kernel regression. It states that one of the best-known methods for nonparametric kernel regression is the "Nadaraya-Watson," or "local constant," estimator. It mentions that one feature of local polynomial kernel regression, unlike the local constant estimator, is that it directly delivers estimates of both the conditional mean and the response.
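The Nadaraya-Watson (local constant) estimator is simply a kernel-weighted average of the responses; a minimal Python sketch with a Gaussian kernel (details and names are my own, not from the article):

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Local constant (Nadaraya-Watson) estimate of E[y | x = x0]:
    a kernel-weighted average sum(K_i * y_i) / sum(K_i)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)      # Gaussian kernel weights
    return (w * y).sum() / w.sum()

# On noiseless y = x^2, the estimate at x0 = 0.5 should be close to 0.25
x = np.linspace(0.0, 1.0, 201)
y = x ** 2
est = nadaraya_watson(0.5, x, y, h=0.05)
```

Because it fits only a local average, the estimator carries a smoothing bias of order h^2 in the interior; the local polynomial estimator mentioned above reduces this bias and additionally yields derivative estimates.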
-
Semiparametric Regression.
The article focuses on semiparametric methods. It states that semiparametric approaches constitute some of the more popular methods for flexible estimation. Semiparametric models combine parametric and nonparametric components and can best be viewed as a compromise between fully nonparametric and fully parametric specifications. Partially linear regression-type models, single-index models, and varying coefficient specifications are also discussed.
-
Software.
The article lists several software packages for regression methods and two-dimensional density estimation, including EasyReg, Limdep, and R.