Easy to Spot Problems With an Econometric Paper
Econometrics
A spatial econometric model describes a vector of variables whose values are defined in space, measured in different locations and spatially dependent, conditional upon covariates (spatial autocorrelation).
From: Handbook of Energy Economics and Policy, 2021
Ecological Economics of Estuaries and Coasts
S. Liu, ... X. Wang, in Treatise on Estuarine and Coastal Science, 2011
12.04.2.2.2 Integrated dynamic ecosystem modeling and function transfer
A model is a "simplified concept within the human mind by which it visualizes reality" (Odum, 1996). It is, above all, a synthesis of elements of knowledge about a system (Jorgensen, 1997; Dale and Van Winkle, 1998). Models can be a great tool to overcome obstacles that prevent a better, more effective linkage between environmental science and policy, by enhancing the understanding of the dynamics associated with the biophysical environment, human society, and the economy (Slocombe, 1993a, 1993b). Furthermore, they can be used either to explicitly show the linkages and responses of the system to different interventions – and as a result to different policy scenarios – or to help in the investigation of areas where more research needs to be done (Costanza et al., 1990; Rykiel, 1996). Models can also be used to enhance public involvement and to build consensus toward policymaking (Costanza and Ruth, 1998; van den Belt, 2004).
Dynamic system models are models built according to scientific laws, often described by simple equations, focusing on the interactions among all parts of a complex system, and frequently sacrificing details and focusing on relevant aspects of an issue, in order to simulate the system in its entirety (Lambin, 1994). Equations and random numbers are employed in dynamic system models to describe some expected behaviors that are well documented in the literature. Time series and other indicators are employed whenever sufficient data are available (Hendry, 1997). In the absence of data, relations are built to mimic the qualitative aspects of the problem, as described in the literature, or formulated by extensive knowledge of the problem. Dynamic system models are useful tools to investigate the system's response and sensitivity to direct and indirect feedbacks and loops, and to predict different scenarios given different policy choices (Costanza et al., 1993). This is particularly the case for ecological economic system models, whose uses normally span (1) understanding system behavior; (2) developing realistic applications; and (3) investigating policy alternatives (Costanza and Voinov, 2002). In such models, links and relations between and within the different sectors of the model are developed by establishing direct and indirect connectors between state and auxiliary variables. The economic principles of substitution, opportunity cost, time value, scarcity, and general equilibrium, among others, are well handled in dynamic models, as are basic economic assumptions. (For more information about this topic, see Chapter 12.05.)
Econometric/statistical models, such as the ones employed in benefit transfer, combine theory with statistical data in a formal quantitative framework (Lambin, 1994; Hendry, 1997), with the ultimate goal of interpreting relationships between observed economic variables. According to Hendry (1997), they have four main uses:
1. summarize data and, by doing so, limit the number of variables of interest;
2. interpret empirical evidence;
3. evaluate the explanatory power of competing theories; and
4. accumulate and consolidate empirical knowledge of how economies function.
On the one hand, the use of such models for valuation purposes leads to their better acceptability within the scientific/policy arena, as one can more easily grasp a phenomenon that is described by a limited number of explanatory variables. On the other hand, the selection of a limited number of variables imposes significant constraints on the analysis of the problem, and on the valuation, because quite often their direct and indirect relations cannot be described within the framework of the proposed model. Probably the main difficulty in building these models is precisely the somewhat intuitive selection of the variables to be correlated and to have their quantitative relationships assessed (Costanza and Ruth, 1998). Other limitations of such models are the extent of one's knowledge about the modeled system (as in all other modeling approaches) and their often static, equilibrium-oriented character, as well as their great reliance on data that are often in short supply, highly aggregated, heterogeneous, nonstationary, time-dependent, and interdependent. Stochasticity in random coefficient models is often limited by the use of constant parameters in equations (Hendry, 1997).
Both approaches have been used to assess, for example, the costs and benefits of deforestation in the Brazilian Amazon. In the dynamic system category, Portela and Rademacher (2001) addressed the socioeconomic character of deforestation in the Brazilian Amazon, modeling the economic incentives for deforestation and population growth – which were considered to be the main factors in the clearing of the rainforest. In this model, an economic trend index was used to reflect the way that different incentives for development compound one another and influence the rates at which land speculation and clearing take place. This model provides a monetary value for the services assessed (i.e., climate regulation, erosion control, nutrient cycling, and biodiversity) and compares that to the annual revenue derived from the land uses for which the forest is cleared. Using an econometric model, Andersen and Reis (1997) investigated the role that subsidies play in the economic development of the region, and the tradeoffs between economic growth and deforestation, particularly with respect to road construction. This model was later expanded into a dynamic and spatial econometric model based on county-level data for the entire Brazilian Amazon from 1970 to 1996, to investigate the effects of some incentives on both economic growth and forest protection (Andersen et al., 2002).
As is quite evident, both approaches are useful and relevant given different needs. The advantage of dynamic system models over statistical models is their use of our scientific knowledge of a system and, as a result, their transferability to new applications (Costanza and Ruth, 1998). Because their construction is based on fundamental concepts which are present in other systems, dynamic models, unlike statistical ones, do not rely on historical or cross-sectional data to have relationships identified or demonstrated, such as those observed in regression equations derived from statistical models (Costanza and Ruth, 1998).
URL: https://www.sciencedirect.com/science/article/pii/B9780123747112012043
ENERGY
R. Kemp, S. Pontoglio, in Encyclopedia of Energy, Natural Resource, and Environmental Economics, 2013
Results from Econometric Studies
Econometric studies use official statistical data to look at real outcomes of real policies. They have been used to study the effects of environmental policies on a broader range of eco-innovations, including product innovations, cleaning processes, and waste management activities. Most studies use patents as the measure of innovation.
Reasons of space prevent us from providing a survey of our own. Instead of giving a summary, we present the conclusions of two authoritative surveys, together with the results from two important studies into the innovation effects of emission trading in the United States and Europe. The first survey of econometric studies is that of Jaffe and others. It is not exclusively limited to econometric studies, but they feature prominently in it. The focus is on the United States. The main conclusion of this survey is that "market-based instruments for environmental protection are likely to have significantly greater, positive impacts over time than command-and-control approaches on the invention, innovation, and diffusion of desirable, environmentally-friendly technologies."
The findings of more recent studies are incorporated in the OECD report 'Impacts of environmental policy instruments on technological change' prepared by Vollebergh. The OECD report is an updated survey of the empirical literature addressing the question whether there is any evidence that different environmental policy instruments have different effects on the rate and direction of technological change.
The main conclusion of the OECD review is that environmental regulation has a demonstrated impact on technological change in general. Effects on invention, innovation, and diffusion of technologies are clearly observable. With regard to the hypothesized superiority of market-based mechanisms, it is stated that it is difficult to compare the impacts of different instruments because the studies analyzed vary greatly in methods and the instruments are different in design features and local circumstances. It is said that "the common (and rather broad) distinction between command and control regulations and market-based instruments may sometimes be too general, and may require modification. Nevertheless, in choosing between both sets of instruments, it is still important to note that 'financial incentives for technology development are usually stronger under market-based instruments' (e.g., a tax)." The proper design of instruments is said to be extremely important. This conclusion, which is also found in Popp et al., has been taken up by technological-change economists, for example, Johnstone and Hascic.
URL: https://www.sciencedirect.com/science/article/pii/B9780123750679000681
Economic Growth and Energy
David I. Stern, in Encyclopedia of Energy, 2004
3.1 Energy and Capital: Substitution and Complementarity
Econometric studies employing the translog and other functional forms have come to varying conclusions regarding whether capital and energy are complements or substitutes. These studies all estimate elasticities at the industry level. It seems that capital and energy are at best weak substitutes and possibly are complements. The degree of complementarity likely varies across industries and with the level of aggregation and the time frame considered.
There are few studies that look at macroeconomic substitution possibilities. In a 1991 paper, Robert Kaufmann and Irisita Azary-Lee demonstrated the importance of accounting for the physical interdependency between manufactured and natural capital. They used a standard production function to account for the indirect energy used elsewhere in the economy to produce the capital substituted for fuel in the U.S. forest products sector. They found that from 1958 to 1984, the indirect energy costs of capital offset a significant fraction of the direct fuel savings. In some years, the indirect energy costs of capital were greater than the direct fuel savings. The results of Kaufmann and Azary-Lee's analysis are consistent with the arguments made previously that scale is critical in assessing substitution possibilities. In this case, the assessment of substitution at one scale (the individual sector) overestimates the energy savings at a larger scale (the entire economy).
URL: https://www.sciencedirect.com/science/article/pii/B012176480X001479
Regional Science
H.G. Overman, in International Encyclopedia of Human Geography, 2009
Spatial Econometrics
Econometrics is used in regional science, as in economics and other social sciences, to give empirical content to theory and to test hypotheses derived from that theory. To take a simple example, many location models predict that trade between locations is decreasing with respect to the distance between those locations. Econometrics can be used to test whether trade does indeed decline with distance (i.e., test a hypothesis derived from theory) and, if so, to provide an estimate of the degree to which trade declines as distance increases (i.e., to provide empirical content to the theory). While general econometric methods have been broadly applied in regional science, regional science is particularly associated with the development and application of spatial econometrics. Spatial econometrics traces its origins to the early 1970s when attempts were made to begin to deal with the methodological issues that arise in multiregion models when there is some form of statistical dependence between outcomes in different regions. Of course, aspatial econometrics also worries about such issues, but what sets spatial econometrics apart is its concern with spatial dependence. That is, with the notion that geographical space, broadly defined, helps shape the nature of any dependence. Spatial econometrics is also concerned with spatial structure or heterogeneity. Again, the feature that distinguishes spatial from aspatial econometrics is the concern with understanding and allowing for the role of heterogeneity across geographical space.
There are three main reasons for considering spatial effects, including spatial dependence and heterogeneity. First, the validity of a number of commonly used econometric techniques is based on underlying assumptions that will be violated in the presence of these spatial effects. Thus, correcting for these spatial effects is important if one is to reach valid conclusions about the nature of the relationships of interest. This 'space as nuisance' view of spatial effects has been a key concern of the spatial econometrics literature. Second, correctly modeling spatial effects can help extract information from data and improve predictions of spatially determined variables, even in situations where we do not understand why such spatial effects occur. This 'space as a source of information' view of spatial effects has long been a concern of the spatial statistics literature and has been of considerable interest in some areas of physical geography (e.g., kriging). In contrast to these nuisance and information views, the third reason to consider spatial effects is that 'space matters'. That is, the interest is in developing techniques that allow one to explain how space affects the relationship of interest. While clearly not mutually exclusive concerns, these three contrasting views, and the need to balance research efforts to address them, represent a source of ongoing tension in terms of spatial econometrics' relationship with both regional science and the wider social science community.
Initial interest in spatial econometrics came from researchers interested in multiregion models. Space clearly matters here, but this was not necessarily reflected in early developments which focused on detecting and correcting for residual spatial autocorrelation or on improving predictions in the presence of such autocorrelation. To take a stylized example, imagine a researcher interested in whether the crime rate in a neighborhood is determined by the socioeconomic characteristics of individuals living in the neighborhood. After collecting appropriate neighborhood data, the researcher runs a linear regression of the crime rate on selected neighborhood characteristics. Using the estimated model, the researcher is able to predict neighborhood crime rates on the basis of the available socioeconomic data. These predicted neighborhood crime rates can be compared to the actual rates, and an unexplained 'residual' calculated as the difference between the two. Those residuals should be random, thus displaying no systematic pattern. One possible departure from randomness, and a key issue of interest in spatial econometrics, concerns the spatial pattern of these residuals. For example, when plotted on a map, the residual for a given neighborhood should be unrelated to those of other neighborhoods nearby. If, in contrast, positive residuals in one neighborhood tend to be associated with positive residuals in nearby neighborhoods (and similarly for negative), then the residuals display spatial autocorrelation. At best, this has implications for the statistical significance of the researcher's findings; at worst it means that the strength or even the direction of estimated relationships may be wrong. In addition, if interest is in predicting crime rates per se, then using the information on the nature of this spatial autocorrelation may help improve those predictions even if we do not understand the socioeconomic processes that actually drive that autocorrelation.
Clearly it would be useful if these kinds of errors could be detected and the spatial econometrics literature (often using insights from spatial statistics) has developed tests to do just that. The two most common are Moran's I and Geary's C, although other measures are available. Clearly, if spatial autocorrelation is detected the regression model should be respecified. Exactly how it should be respecified, however, depends on the source of the spatial autocorrelation. There are three possibilities, best illustrated through the continued use of the example on the relationship between neighborhood crime and socioeconomic characteristics. The first possibility is that the crime rate in a neighborhood increases and this, in turn, directly increases the crime rate in nearby neighborhoods. For example, a rise in crimes in a neighborhood encourages copycat crimes in nearby neighborhoods. This can be captured in the regression model through the inclusion of information on crime rates in nearby neighborhoods. The second possibility is that the socioeconomic characteristics of a neighborhood change in a way that increases crime in that neighborhood and also directly increases crime in nearby neighborhoods. For example, the number of young people in a neighborhood increases and they commit crimes in both that neighborhood and nearby neighborhoods. This can be captured in the regression model through the inclusion of information on the socioeconomic characteristics of nearby neighborhoods. The third possibility is that unexpectedly high crime rates in one neighborhood tend to be associated with unexpectedly high crime rates in nearby neighborhoods but that this effect does not work directly (through, e.g., copycat crime) or indirectly (through socioeconomic characteristics). This happens when there are factors that cause crime that are unobserved (at least to the researcher) and correlated across neighborhoods. This can be captured by assuming that there is spatial autocorrelation between the residuals of neighborhoods. That is, one solution to the problem of spatial autocorrelation of the residuals is specifically to allow for the spatial autocorrelation of the residuals in a revised specification! This feels somewhat circular and, in terms of understanding the underlying socioeconomic processes, is only appropriate if one can rule out the other two mechanisms through which spatial autocorrelation arises.
This discussion may well give the impression that it is hard to distinguish between these three different possibilities. The more formal treatment available in standard spatial econometrics texts confirms that this is indeed the case. It would be fair to say that these identification problems have received little attention in the spatial econometrics literature. Attention has, instead, focused on the specification and estimation of linear spatial regression models (including debates around the determination of appropriate 'spatial weight matrices') and the formal properties of the resulting estimators and associated test statistics. Efforts also went into expanding the spatial approach to include panel data and discrete choice estimation. Increasingly, this emphasis and a growing interest in spatial dependence have moved spatial econometrics into the mainstream econometrics literature.
While admirable, this progress in dealing with space as nuisance and as a source of information for prediction has not, however, been matched by comparable advances in the applied spatial econometric literature in increasing our understanding of situations in which space matters. There are two main problems here. First, the focus of too many applied spatial econometrics papers is on the implementation of spatial econometrics, with the result that far too little attention is paid to constructing analyses that are informative about theory. The burgeoning growth convergence 'industry' is a good example of this. When attention is focused more directly on theory, the problem is that the proposed tests of many theoretical propositions regarding spatial behavior do not properly identify the precise mechanism through which interdependence occurs. Of course, in the spatial setting, this sort of identification is extremely difficult. In the crime example above, it is almost impossible to determine whether spatial interdependence in crime rates works through the direct or indirect mechanism. To separate out these two mechanisms, one would need a way to exogenously change crime rates in one neighborhood and see what effect this had on nearby neighborhoods. In reality, the only way through which this might happen is by changing the socioeconomic characteristics of a neighborhood, but then both mechanisms will be in operation and there is no way to separate them out. In some situations, it may be possible to directly change the dependent variable, but even then any change needs to be independent of changes in the other explanatory variables. For example, when considering tax competition between jurisdictions, it may be possible to identify the interaction between tax rates, providing that changes do not reflect other changes in those jurisdictions. More attention to deriving clear predictions from theory and the associated search for identification should be central to the application of spatial econometrics by regional scientists trying to test spatial theories. It is not, and as a result, while spatial econometric theory is moving into the mainstream econometrics literature, much applied spatial econometrics is ignored by mainstream economics. Of course, acceptance by mainstream economics is not the objective of many regional scientists. But the crucial issue here is the reason for that rejection, not the rejection per se. A similar story, also involving the link between theory and empirics, plays out with respect to regional impact models, which represent another set of key methodological tools in regional science.
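To make the residual-diagnostics step described above concrete, here is a minimal sketch of a global Moran's I test on regression residuals. The grid layout, contiguity weight matrix, random "residuals", and permutation-based p-value are illustrative assumptions, not drawn from the text; in practice the residuals would come from the crime regression and the weights from the actual neighborhood geography.

```python
import numpy as np

def morans_i(residuals, W):
    """Global Moran's I for a residual vector and a spatial weight matrix W."""
    z = residuals - residuals.mean()
    W = W / W.sum(axis=1, keepdims=True)      # row-standardise the weights
    num = z @ W @ z                           # spatially weighted cross-products
    den = z @ z
    n = len(z)
    return (n / W.sum()) * (num / den)

# Illustrative setting: 100 'neighborhoods' on a 10 x 10 grid, rook contiguity.
rng = np.random.default_rng(0)
n_side = 10
coords = np.array([(i, j) for i in range(n_side) for j in range(n_side)])
dist = np.abs(coords[:, None, :] - coords[None, :, :]).sum(-1)
W = (dist == 1).astype(float)                 # neighbors share a grid edge

residuals = rng.normal(size=n_side * n_side)  # stand-in for OLS residuals
I_obs = morans_i(residuals, W)

# Permutation test: reshuffle residuals over locations to approximate the null.
perm = np.array([morans_i(rng.permutation(residuals), W) for _ in range(999)])
p_value = (np.sum(perm >= I_obs) + 1) / (len(perm) + 1)
print(f"Moran's I = {I_obs:.3f}, permutation p-value = {p_value:.3f}")
```

A value of Moran's I well above its permutation distribution would signal positive spatial autocorrelation in the residuals, the situation in which the respecification choices discussed above become relevant.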
URL: https://www.sciencedirect.com/science/article/pii/B9780080449104007379
Nuclear Power Economics
Geoffrey Rothwell, in Encyclopedia of Energy, 2004
3.1.4 Plant Costs: Empirical Evidence and Projections
The econometric analysis of U.S. NPP plant costs began in the mid-1970s. This literature focused on the issue of whether there were economies of scale in NPP generating capacity, controlling for the year of commercial operation, construction duration, and plant characteristics, such as location, number of units at the site, and NSSS manufacturer. Although early analyses found increasing returns to scale, analyses of later data could not reject constant returns to scale, primarily due to the imprecision of the estimate of scale parameters when large, expensive units came into production after the accident at Three Mile Island in 1979. For example, plant costs at Watts Bar, the last unit to attain commercial operation in the United States, were nearly $7 billion (in mixed-nominal dollars), or about $6000/kW with a construction duration of 24 years from 1972 to 1996.
Estimated costs for advanced light water reactors, based on more recent experience (primarily in Asia), are much lower. In NEA (1998), plants are either commercially available or expected to be commercially available between 2005 and 2010. Base construction cost varies between $1020/kW in China (using Chinese technology) and $2521/kW in Japan (for an ABWR).
URL: https://www.sciencedirect.com/science/article/pii/B012176480X00142X
Management of Water Resources
B. Dziegielewski, D.D. Baumann, in Treatise on Water Science, 2011
1.10.4.3.1 Model-dependent prediction intervals
In econometric forecasts, each empirically derived model can be tested for specification error using Ramsey's specification test (Ramsey, 1969), and for heteroscedasticity using the Breusch–Pagan–Godfrey test (Breusch and Pagan, 1979), Glejser's test (Glejser, 1969), Harvey's test, and White's test (White, 1980). The specification and heteroscedasticity tests allow the analyst to develop predictive equations which minimize the errors from misspecification of the model and biases in model parameters. Other model-dependent errors can be quantified using confidence intervals (Dziegielewski et al., 2005).
For example, assuming that the errors are normally distributed in a log-linear model in which Y designates water use, it can be shown that

(36) $E(Y_{it}) = \exp\left(E(\ln Y_{it}) + \tfrac{1}{2}\sigma^{2}\right)$

Thus, in log-linear models, the predicted value, denoted as $\hat{Y}_{it}$, is given by

(37) $\hat{Y}_{it} = \exp\left(\widehat{\ln Y}_{it} + \tfrac{1}{2}s^{2}\right)$

where $s^{2}$ is the mean square error of the log-linear model and $\widehat{\ln Y}_{it}$ the predicted value obtained from the log-linear model.
It is straightforward to obtain the in-sample prediction confidence intervals in a linear model. However, in a log-linear model, the in-sample prediction intervals are obtained under the assumption that the errors are normally distributed. Thus, for normally distributed errors the variance of $\hat{Y}_{it}$ in Equation (37) is estimated by

(38) $\operatorname{var}(\hat{Y}_{it}) = \hat{Y}_{it}^{2}\left(s^{2}_{\ln Y_{it}} + \dfrac{s^{4}}{2m}\right)$

where $s^{2}_{\ln Y_{it}}$ is the square of the standard error of the logarithmic prediction (i.e., of $\ln Y_{it}$), m the degrees of freedom, and $s^{2}$ the mean square error of the log-linear model. The standard error of $\hat{Y}_{it}$ is denoted as

(39) $s_{\hat{Y}_{it}} = \sqrt{\operatorname{var}(\hat{Y}_{it})}$
Assuming that the $\hat{Y}_{it}$ are asymptotically normally distributed, the confidence interval for the prediction can be obtained as $\hat{Y}_{it} \pm z_{\alpha/2}\, s_{\hat{Y}_{it}}$, where $z_{\alpha/2}$ is the critical value from a normal distribution for a prespecified α. However, for out-of-sample predictions, the square of the standard error of the logarithmic prediction (i.e., $s^{2}_{\ln Y_{it}}$) is not available. To rectify this, one can use the average standard error of prediction (averaged over all observations in the historical data).
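A minimal numerical sketch of these steps, assuming an ordinary least squares fit of ln Y on made-up covariates; the data, variable names, and the 95% level are illustrative, and the level-prediction variance follows the delta-method form written above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: log water use as a linear function of two covariates plus noise.
n, k = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
beta_true = np.array([3.0, 0.4, -0.2])
lnY = X @ beta_true + rng.normal(scale=0.3, size=n)

# OLS fit of the log-linear model.
beta_hat, *_ = np.linalg.lstsq(X, lnY, rcond=None)
resid = lnY - X @ beta_hat
m = n - X.shape[1]                      # degrees of freedom
s2 = resid @ resid / m                  # mean square error of the log-linear model

# In-sample prediction in levels, Eq. (37): exp(log-prediction + s^2 / 2).
XtX_inv = np.linalg.inv(X.T @ X)
lnY_hat = X @ beta_hat
Y_hat = np.exp(lnY_hat + 0.5 * s2)

# Eqs. (38)-(39): standard error of the level prediction (delta-method form above).
s2_lnY_hat = s2 * np.einsum('ij,jk,ik->i', X, XtX_inv, X)  # var of each log prediction
var_Y_hat = Y_hat**2 * (s2_lnY_hat + s2**2 / (2 * m))
se_Y_hat = np.sqrt(var_Y_hat)

# 95% prediction confidence interval for the first observation.
z = 1.96
print(Y_hat[0] - z * se_Y_hat[0], Y_hat[0] + z * se_Y_hat[0])
```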
URL: https://www.sciencedirect.com/science/article/pii/B9780444531995000178
Philosophy of Econometrics
Aris Spanos, in Philosophy of Economics, 2012
1 Introduction
Philosophy of econometrics is concerned with the systematic study and appraisal of general principles, statistical procedures and modeling strategies, as well as philosophical presuppositions that underlie econometric methods, with a view to evaluate their effectiveness in achieving the primary objective of 'learning from data' about economic phenomena of interest. In philosophical jargon it is a core area of the philosophy of economics, concerned primarily with epistemological and metaphysical issues pertaining to the empirical foundations of economics. In particular, it pertains to methodological issues having to do with the effectiveness of statistical methods and procedures used in empirical inquiry, as well as ontological issues concerned with the worldview of the econometrician. Applied econometricians, grappling with the complexity of bridging the gap between theory and data, face numerous philosophical/methodological issues pertaining to transforming non-experimental, noisy and incomplete data into reliable evidence for or against a substantive hypothesis or a theory.
Discussions of econometric methodology since the late 1970s have been primarily 'local' affairs [see Granger, 1990; Hendry et al., 1990; Hendry, 2000; Leamer 1978; Pagan, 1987; Sims, 1980; Spanos, 1988; 1989], where no concerted effort was made to integrate the discussions into the broader philosophy of science discourses concerning empirical modeling; some notable recent exceptions are [Hoover, 2002; 2006], [Keuzenkamp, 2000] and [Stigum, 2003]. In certain respects, other social sciences, such as psychology, sociology or even political science, have been more cognizant of methodological issues pertaining to statistical inference and modeling; see [Morrison and Henkel, 1970; Lieberman, 1971; Harlow et al., 1997]. A recent exception in economics is [Ziliak and McCloskey, 2008].
The philosophy of econometrics, as an integral part of economic modeling, is currently in its infancy, with most econometricians being highly sceptical about the value of philosophical/methodological discussions. The focus of the econometric literature since the early 1960s has been primarily on technical issues concerned with extending estimation and testing procedures associated with the Classical Linear Regression (CLR) and related models in a number of different directions. These modifications/extensions are theory-dominated and driven by the objective to 'quantify theory-intimated (structural) models'. As a result, the focus has been on (a) technical problems such as endogeneity/simultaneity, dependence, heterogeneity, heteroskedasticity and non-linearity, and (b) different types of data (time series, cross-section and panel); see [Greene, 2000; Kennedy, 2008].
The methodology of economics literature, although extensive, so far has focused primarily on issues such as the status of economic assumptions, the structure of economic theories, falsification vs. verification, Kuhnian paradigms vs. Lakatosian research programs, the sociology of scientific knowledge, realism vs. instrumentalism, 'post-modernist' philosophy, etc.; see [Backhouse, 1994; Blaug, 1992; Davis et al., 1998; Mäki, 2001; 2002; 2009; Redman, 1991]. Even in methodological discussions concerning the relationship between economic theories and reality, econometrics is invariably neglected [Caldwell, 1994, p. 216] or even misrepresented [Lawson, 1997]. Indeed, one can make a case that, by ignoring the philosophical issues pertaining to empirical modeling, the literature on economic methodology has painted a rather lopsided picture of the relevance of the current philosophy of science in addressing the philosophical/methodological problems that have frustrated economics in its endeavors to achieve the status of a credible empirical science. When assessing the current state of philosophy of science and its value for economic methodology, Hands [2001] argued that philosophy of science is "currently in disarray on almost every substantive issue" and provides "no reliable tool for discussing the relationship between economics and scientific knowledge" (p. 6). I consider such admonitions unhelpful and believe that parts of current philosophy of science focusing on 'learning from data' (see [Chalmers, 1999; Hacking, 1983; Mayo, 1996]) have a lot to contribute toward redeeming the credibility of economics as an empirical science.
In recent discussions on the financial crises that burst onto the scene in September 2008, the economists participating in the debate concerning the different policies on how to deal with the deepening recession were invariably invoking causal knowledge between key policy variables, like government expenditure, and macro aggregates like the Gross Domestic Product (GDP). The problem was that all they had to offer as evidence for their claimed knowledge was a combination of strong beliefs in the appropriateness of their particular economic perspective (Classical, Keynesian, Neo-Keynesian, monetarist, Neo-Classical, etc.), combined with armchair empiricism based on analogical reasoning from past 'similar' episodes. Establishing causal knowledge will require a lot more than that, including securing the statistical and substantive adequacy of the models appealed to. Unfortunately, the current econometric literature seems rather oblivious to this crucial problem. Indeed, a closer look at the empirical evidence published in prestigious journals over the last half century reveals heaps of untrustworthy estimates and testing results which provide at best a tenuous, if any, connection between economic theory and observable economic phenomena, and facilitate no veritable learning from data; see [Spanos, 2006a].
The main thesis of the paper is that without proper philosophical/methodological foundations to guide the practitioner on how to properly use the various statistical procedures, as well as interpret the resulting inferences, no veritable knowledge can be accumulated using data modeling. Accretions of statistical methods with ever increasing technical sophistication to quantify one's favorite structural (estimable theory) model, without the underlying philosophy of when and how to apply such procedures in order to give rise to reliable inferences, will continue to add to the mountains of untrustworthy evidence. Indeed, the increasing technical sophistication makes matters worse by giving practitioners a sense of misplaced faith in the credibility of the evidence produced by such procedures; see [Spanos, 2010c].
The main aim of this paper is to attempt a demarcation of the intended scope of a philosophy of econometrics with a view to integrate its subject matter into the broader philosophy of science discourses. An important objective is to bring out the potential value of a bidirectional relationship between philosophy of science and applied fields in the social sciences. Econometrics can benefit from the broader philosophical discussions on 'learning from data', and philosophy of science can enrich its perspective by paying more attention to the empirical modeling practices in disciplines, like econometrics, which rely primarily on observational (non-experimental) data.
In section 2, a simple empirical example is used to bring out the diversity and complexity of philosophical/methodological issues raised by such modeling attempts in applied econometrics. Section 3 attempts to provide a highly selective summary of 20th century philosophy of science, focusing primarily on aspects of that literature that pertain to empirical modeling. Section 4 brings out the foundational issues bedeviling statistical inference since the 1930s, as a prelude to section 5 which discusses the error-statistical perspective [Mayo and Spanos, 2010b], as providing an appropriate framework for a philosophy of econometrics. This perspective is presented as a refinement/extension of the Fisher-Neyman-Pearson (F-N-P) approach to statistical induction, which can be used to effectively address some of the inveterate foundational problems that have bedeviled frequentist statistical inference since the late 1930s. The error-statistical approach is further developed in section 6 to secure the trustworthiness of evidence for or against substantive claims. The error statistical perspective is then used in section 7 to shed new light on a number of crucial philosophical/methodological problems pertaining to econometrics.
URL: https://www.sciencedirect.com/science/article/pii/B9780444516763500130
Econometric modelling and forecasting of wholesale electricity prices
Alessandro Sapio, in Handbook of Energy Economics and Policy, 2021
3.1.2 Heteroskedastic and stochastic volatility models
The inability of ARMA-type models to account for time-dependence and clustering in volatility has been tackled through models that incorporate conditional heteroskedasticity. In conditionally heteroskedastic models, the conditional variance (e.g., σ² in an AR(p) process) is not constant. In one strand of models, the conditional variance depends on some of the variables of interest; other models assume it is serially correlated.
A model corresponding to the first case is the stochastic variance AR(p) model described in Ghysels et al. (1996). In the model, the variance of the error term is a function of lagged prices, and hence, it is time varying:

(15.13) $P_t = \phi_1 P_{t-1} + \dots + \phi_p P_{t-p} + \sigma_t \epsilon_t$

with

$\sigma_t = \left[\psi(B) P_{t-1}\right]^{\gamma}$

where $\epsilon_t$ is now an i.i.d. error term with zero mean and unit variance. The exponent γ > 0 and the polynomial ψ(B) tune the dependence of the conditional variance on the previous prices.
The Auto-Regressive Conditionally Heteroskedastic (ARCH) model was introduced by Engle (1982), originally for applications in finance, to model data characterised by serially correlated conditional variance. Specifically, the conditional variance of the error term reads:

(15.14) $\sigma_t^2 = \omega + \alpha(B)\, u_t^2$

with constant coefficients ω and the terms in the α(B) polynomial, where $u_t = \sigma_t \epsilon_t$ denotes the error term. Because the conditional variance is positive by definition, the ARCH model imposes non-negativity constraints upon the coefficients. The ARCH process is weakly stationary if and only if $\alpha(1) < 1$, that is, if the ARCH coefficients sum to less than one. The process is characterised by positive excess kurtosis even if $\epsilon_t$ is Gaussian.
More generally, the conditional variance can depend on a moving average component, as proposed by Bollerslev (1986), leading to a Generalised Auto-Regressive Conditionally Heteroskedastic (GARCH) model:

(15.15) $\sigma_t^2 = \omega + \alpha(B)\, u_t^2 + \beta(B)\, \sigma_t^2$

where α(B) and β(B) are polynomials of order p and q, respectively. Non-negativity constraints on coefficients hold for the GARCH model as well.
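To make the volatility-clustering mechanism concrete, the following sketch simulates a GARCH(1,1) process; the parameter values ω = 0.1, α₁ = 0.1, β₁ = 0.85 are arbitrary illustrations, not estimates from the chapter, and the checks confirm the two properties noted above (serially correlated squared errors and positive excess kurtosis despite Gaussian innovations).

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative GARCH(1,1) parameters; alpha1 + beta1 < 1 ensures weak stationarity.
omega, alpha1, beta1 = 0.1, 0.1, 0.85

T = 10_000
u = np.zeros(T)                                      # error term u_t = sigma_t * eps_t
sigma2 = np.full(T, omega / (1 - alpha1 - beta1))    # start at the unconditional variance
eps = rng.standard_normal(T)                         # i.i.d. N(0, 1) innovations

for t in range(1, T):
    sigma2[t] = omega + alpha1 * u[t - 1] ** 2 + beta1 * sigma2[t - 1]
    u[t] = np.sqrt(sigma2[t]) * eps[t]

# Volatility clustering: squared errors are serially correlated even though eps is i.i.d.
acf1 = np.corrcoef(u[1:] ** 2, u[:-1] ** 2)[0, 1]
# Excess kurtosis is positive despite Gaussian innovations.
kurtosis = np.mean(u ** 4) / np.mean(u ** 2) ** 2 - 3
print(f"lag-1 autocorrelation of u_t^2: {acf1:.3f}, excess kurtosis: {kurtosis:.3f}")
```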
Applications in electricity econometrics couple GARCH processes for the conditional variance with an ARMA process for the price mean, giving rise to ARMA–GARCH models. These models thus include two equations, one for the mean and one for the conditional variance, which are estimated jointly. The equation for the mean can also be an ARFIMA process, as in the study by Gianfreda and Grossi (2012) among others, or a regime-switching or jump-diffusion model (Sections 3.1.3 and 3.1.4).
Several generalisations of the GARCH model have been considered in the literature, to encompass some empirical facts or to allow the testing of theoretical hypotheses. One approach modifies the equation for the mean, leading to the GARCH-in-mean model, wherein the equation for the mean includes a measure of volatility, such as the square root of the conditional variance (Engle et al., 1987). Convexity in the electricity supply stack implies that in volatile market sessions, the average price is higher. Hence, the coefficient associated with the volatility term in the mean equation is expected to be positive. Other approaches involve the equation for the conditional variance. In the exponential GARCH (Nelson, 1991), the conditional variance depends on both the magnitude and the sign of past residuals. Technically speaking, this model makes it possible to overcome the non-negativity restriction on the coefficients. More importantly, it allows the inverse leverage effect to be modelled (see Section 2.4). The threshold GARCH (TGARCH) model (Zakoian, 1994) seeks to match the empirical evidence suggesting the existence of discontinuities in the time series behaviour of electricity prices. As market conditions change, it often appears that market-clearing prices transition to different regimes, characterised by different means and variances, as well as different autoregressive and heteroskedasticity properties.
Stochastic variance and GARCH-type models are well suited to reproduce volatility clustering and long tails, as well as mean reversion when coupled with ARMA models for the mean. Yet, despite efforts to model discontinuities (e.g. the TGARCH model), these models do not provide a good-enough match to the evidence of spikes and structural changes in the process driving the electricity price. These considerations have paved the way for two further model families: regime-switching (RS) and jump-diffusion models.
URL: https://www.sciencedirect.com/science/article/pii/B9780128147122000154
Valuing the Ocean Environment
Frank Ackerman, in Managing Ocean Environments in a Changing Climate, 2013
Tourism and Climate Change
Tourism is a very climate-dependent activity, in which the customers—though not the facilities that serve them—are able to move almost immediately to different parts of the world. It is therefore a leading candidate for disruptive economic effects of climate change. The literature in the field includes several attempts to identify the ideal climate for a tourist destination, including not only temperature but also humidity, precipitation, wind speed, and hours of sunlight. It is not surprising to learn that climate change is expected to shift tourist preferences toward higher latitudes and/or nonsummer seasons (Amelung et al., 2007).
A complete description of climate change impacts on tourism would require a complex analysis of expected losses in current warm-weather destinations versus gains in cooler locations. (Winter tourism, such as skiing trips, is small by comparison with warm-weather tourism and is excluded from this discussion.) Even if tourists can switch destinations at once, there may still be important transitional costs, since the hotels, restaurants, and other tourist facilities cannot immediately follow them. For example, as Europe gets hotter, German tourists can easily decide to spend holidays in Norway instead of Spain, but the Spanish tourist industry will be left behind with substantial losses, while Norway will have the expense of building additional hotels.
There have been several econometric analyses of international tourism, but there is not yet a consensus on even the most basic points. For example, the optimum temperature for a tourist destination has been estimated at a 24-h year-round average of 14 °C (Hamilton et al., 2005) or a 24-h average for the warmest month of the year of about 21 °C (Lise and Tol, 2002). In California, San Francisco would be optimal by the former standard, while San Diego is close to perfect by the latter. While other factors are also important to the choice of tourist destinations, most Caribbean and Pacific islands are already well above both of these standards and are likely to become less attractive as temperatures rise.
A detailed examination of climate impacts on Caribbean islands compared two scenarios, assuming 1.2 and 5.4 °C warming from 2000 to 2100 (Bueno et al., 2008). Building on an earlier World Bank methodology for estimating tourism impacts, the study projected climate-related losses in the two scenarios of 7.0% and 35.3% of Caribbean tourism by 2100.
For long-term forecasts of the growth of tourism, it is essential to estimate how demand will change as incomes rise. This is typically expressed in terms of the income elasticity of demand, defined as the percentage increase in demand that occurs when incomes rise by 1%. Products with income elasticity greater than one represent a bigger share of consumption for the rich than the poor; this is characteristic of luxuries, but not necessities. Since international tourism is unmistakably a luxury, it would be expected to have income elasticity greater than one. Although there have been a few estimates of tourism elasticities below one (e.g., Hamilton et al., 2005), most estimates are much higher. A meta-analysis of early studies reports a mean income elasticity of 1.86 (Crouch, 1995). A more recent study reports 79 estimates published since 2000; the median is greater than 2, and only 9 of the estimates are less than 1 (Song et al., 2010). Using a sophisticated econometric methodology, the same study estimates an income elasticity of 1.36 for tourist visits to Hong Kong.
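Restating that definition symbolically (with Q standing for tourism demand and I for income; the symbols are illustrative, not the chapter's notation):

```latex
\varepsilon_I \;=\; \frac{\partial Q / Q}{\partial I / I}
\;=\; \frac{\partial \ln Q}{\partial \ln I},
\qquad \varepsilon_I > 1 \ \text{for a luxury such as international tourism.}
```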
Quite a bit of money is at stake in ocean tourism. A global estimate of the value of selected marine recreational activities, encompassing recreational fishing, diving (both snorkeling and scuba diving), and whale-watching, found that worldwide expenditure on these activities was $47 billion in 2003, most of it ($40 billion) for recreational fishing. These activities involved 121 million participants and created more than 1 million jobs. The same study cited an estimate that spending in the United States alone on these three forms of recreation was about $30 billion in 2003, suggesting that the $47 billion global estimate could be too low (Cisneros-Montemayor and Rashid Sumaila, 2010). For recreational fishing, the dominant part of this calculation, the negative impacts from climate change may be parallel to those anticipated for commercial fishing.
Coral reefs, a focal point for ocean tourism, are an irreplaceable ecosystem that will be one of the first to be threatened by global environmental change. Some researchers have found that at 1.7 °C above preindustrial temperatures, all warm-water coral reefs will be bleached, and by 2.5 °C, they will be extinct (IPCC 2007 Working Group II, Ch. 4; Carpenter et al., 2008). These temperatures are much lower than for most other kinds of expected climate damages—and the world is already roughly 0.8 °C above preindustrial temperatures. If climate change continues unabated, it seems likely that the colorful reefs, and the tourism industry that has grown up around them, will be gone well before the end of the century. Temperature increases are only one of several threats to coral reefs, along with acidification and pollution; the synergistic effects of all three occurring simultaneously make the picture even more ominous (see Chapter 6). Thus, the avoidable economic impacts of climate change could include the complete loss of the existing coral reef tourism industry.
The threat is all too real: coral bleaching is not only an ecological problem but a crisis for tourism revenues as well. Tourism revenues drop sharply after coral bleaching, as many tourists choose other destinations. Losses attributable to coral bleaching range from tens of millions of dollars for a single country to billions of dollars on a larger scale; the long-run economic costs of a very serious 1998 coral bleaching event in the Indian Ocean may reach $3.5 billion in lost tourism revenues, in addition to almost $5 billion in other coral reef ecosystem services (Pratchett et al., 2008).
There is an extensive literature estimating recreational values of individual coral reefs, producing widely divergent estimates. A meta-analysis found that reef visitors place a higher value on locations with a larger area of dive sites and fewer other tourists. It also found what appeared to be a lower quality of research than in other meta-analyses of environmental values; both methodology and authorship were significant explanatory variables, suggesting a lack of consistency in approach to the question (Brander et al., 2007). For an additional compilation of studies, see Conservation International (2008).
A widely cited 2003 report—supported by World Wildlife Fund and the International Coral Reef Action Network, and written by leading researchers in the field—estimated the net benefit of the world's coral reefs, if well managed and intact, at U.S. $29.8 billion in 2001. Of that amount, $9.6 billion was from tourism and recreation, $9.0 billion from coastal protection, $5.7 billion from fisheries, and $5.5 billion from the value of biodiversity. Half the tourism and recreation value and more than 40% of the total value came from coral reefs in Southeast Asia (Cesar et al., 2003).
As that study and others have noted, reef-related tourism is growing rapidly; the total today is likely well beyond the $9.6 billion estimate for 2001. For example, that global estimate included $1.1 billion in net benefits of coral reef tourism in Australia. An Australian consultants' study (using different definitions and methods) found that Great Barrier Reef tourism contributed $4.9 billion in value added and $6.0 billion in GDP to the Australian economy in 2005-2006 (Access Economics Pty Limited, 2007). For references to additional studies reaching similar conclusions about the values of the Great Barrier Reef, see Stoeckl et al. (2011).
URL: https://www.sciencedirect.com/science/article/pii/B9780124076686000100
An econometric approach for Germany's short-term energy demand forecasting
Symeoni Soursou Eleni, in Mathematical Modelling of Contemporary Electricity Markets, 2021
3.2.2 ARMA and ARMAX
ARMA modelling is also an econometric technique dedicated to univariate models; it combines an autoregressive (AR) and a moving average (MA) model. Thus, two conditions have to be met for an ARMA model: stationarity and invertibility (Box and Jenkins, 1976). The term invertibility describes the ability of an AR(1) or MA(1) model to be transformed into an MA(∞) or AR(∞) model, respectively (Vamvoukas, 2008).
The mathematical expression of an ARMA(p, q) model is as follows:

(2.5) $Y_t = \alpha_0 + \alpha_1 Y_{t-1} + \dots + \alpha_p Y_{t-p} + \varepsilon_t + \beta_1 \varepsilon_{t-1} + \dots + \beta_q \varepsilon_{t-q}$

If the time series $Y_t$ is stationary, then its mean is defined as:

(2.6) $E(Y_t) = \dfrac{\alpha_0}{1 - \alpha_1 - \alpha_2 - \dots - \alpha_p}$

Assuming a general ARMA model of order (p, q), in the case s ≤ q the initial values of the autocorrelation coefficient $\rho_s$ depend on the values of the α and β coefficients (Vamvoukas, 2008). If instead s > q, then the ARMA model's $\rho_s$ and $\gamma_s$ are identical to those of an AR(p) process:

(2.7) $\rho_s = \alpha_1 \rho_{s-1} + \alpha_2 \rho_{s-2} + \dots + \alpha_p \rho_{s-p}$

and

(2.8) $\gamma_s = \alpha_1 \gamma_{s-1} + \alpha_2 \gamma_{s-2} + \dots + \alpha_p \gamma_{s-p}$
Finally, under the ARMAX framework, independent variables may be included in the ARMA process (Md Hasanuzzaman, 2019). These special multivariate ARMA or ARIMA models allow for independent variables in two distinct ways: either the independent variables take part in the specification process (Hamilton, 1994), or the models reduce to Box-Jenkins ARIMA models in the dependent variable (Stata). Fig. 2.1 provides the Box-Jenkins methodology: the four-step evolution of ARMA/ARIMA models.
Fig. 2.1. Box-Jenkins methodology: 4 Step Evolution of ARMA/ARIMA Models.
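As a quick numerical illustration of the stationary-mean expression in Eq. (2.6), the sketch below simulates an ARMA(1,1) process and compares the sample mean of a long simulated path with α₀/(1 − α₁); the coefficients α₀, α₁, β₁ are arbitrary illustrative values, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative ARMA(1,1) coefficients; |alpha1| < 1 ensures stationarity.
alpha0, alpha1, beta1 = 2.0, 0.6, 0.4

T = 100_000
eps = rng.standard_normal(T)
Y = np.zeros(T)
Y[0] = alpha0 / (1 - alpha1)               # start at the theoretical mean

for t in range(1, T):
    Y[t] = alpha0 + alpha1 * Y[t - 1] + eps[t] + beta1 * eps[t - 1]

theoretical_mean = alpha0 / (1 - alpha1)   # Eq. (2.6) for p = 1
print(f"sample mean: {Y.mean():.3f}, theoretical mean: {theoretical_mean:.3f}")
```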
URL: https://www.sciencedirect.com/science/article/pii/B9780128218389000025
Source: https://www.sciencedirect.com/topics/earth-and-planetary-sciences/econometrics