Discover how statistical information impacts decisions in today's business world as Anderson/Sweeney/Williams/Camm/Cochran/Fry/Ohlmann's leading STATISTICS FOR BUSINESS AND ECONOMICS, 14E connects concepts in each chapter to real-world practice. This edition delivers sound statistical methodology, a proven problem-scenario approach and meaningful applications that reflect the latest developments in business and statistics today. More than 350 new and proven real business examples, a wealth of practical cases and meaningful hands-on exercises highlight statistics in action. You gain practice using leading professional statistical software with exercises and appendices that walk you through using JMP Student Edition 14 and Excel 2016. WebAssign's online course management system further strengthens this business statistics approach and helps you maximize your course success.
"'Nonparametric Econometrics' by Li and Racine is a must for any serious econometrician or statistician who is working on cutting-edge problems. The theoretical treatment of nonparametric methods is remarkably complete in its coverage of mainstream and relatively arcane topics. I particularly like Li and Racine's general treatment of continuous and discrete regressors and of specification testing, topics that I have not seen handled in such a comprehensive fashion. I will certainly use this in my graduate econometrics courses and in conducting my own research."--Robin Sickles, Rice University
"Very few studies have tried to apply nonparametric techniques to the analysis of real data. The lack of applications of those techniques is perhaps attributable to the lack of a good textbook that explains intuitively how and why those techniques work. This book by Li and Racine serves both applied researchers and graduate students. It is written in plain language so that it can be understood by anyone with basic econometrics but zero knowledge of nonparametric methods. And it contains enough specifics to clearly spell out the steps needed to implement those methods."--Chunrong Ai, University of Florida
"This book represents a very significant contribution to the field of econometrics. It provides an extremely thorough coverage of our knowledge in the area of nonparametric and semiparametric methods as they apply to economic models and economic data. And it makes accessible, for the first time, a body of relatively new material relating to discrete and 'mixed' data. There is a good balance of theoretical material and applications. Apart from serving as a superb teaching text in graduate-level courses where the students have a strong econometrics/statistics preparation, I believe this book will become a must-have reference resource for many researchers."--David E. Giles, University of Victoria
This full-color book provides a compendium of stimulating facts about the states, presented graphically and covering a wide array of topics, including demographic, economic, environmental, health, and crime variables. Hundreds of attributes are compared side by side, from life expectancy to murder rates, from fourth-grade math proficiency scores to the number of food stamp recipients, and from illicit drug use to the rate of firearm background checks per state. Meticulous organization and graphic formats make it easy for readers to retrieve the specific information they seek. In addition to the graphs comparing the fifty states on each individual metric, a summary table and highlights of pertinent data open each chapter. While we are one, indivisible nation, Americans are as diverse from state to state as many nations are from one another. For example: in 2010, 95 percent of Vermont residents were white compared with only 24 percent of residents in Hawaii, and 1 in 12 New York residents were Jewish compared with fewer than 1 in 1,000 Arkansas residents. In Texas, 464 prisoners have been executed over the past 35 years, while 16 states have not executed any. More interesting facts found in Ranking America's Fifty States include:
*Alaska ranked highest or lowest in 31 metrics, more than any other state, followed by Mississippi at 25 and Texas at 20.
*Alaska is the only state with neither a state income tax nor a state sales tax. It had the highest per capita revenues from taxes levied on businesses for the extraction of oil and gas, receives the highest federal aid per capita, and had the lowest percentage of households with annual income below $15,000.
*Over the past decade, more than 100 million firearms background checks have been performed nationally, with the highest rate in Utah and the lowest in New Jersey.
*Mississippi had the lowest personal income per capita, median household income, gross domestic product per capita, and male life expectancy. Additionally, it had the highest food stamp recipient rate, rate of persons below the poverty level, and infant mortality rate.
*Florida had the highest rate of identity theft victims in 2010, followed by Arizona, California, and Georgia.
*Texas had the most extreme environmental metrics, including the highest numbers of major disaster, storm, and wildfire emergency declarations, the highest summer air temperatures, and the highest carbon dioxide emissions. Texas also had the highest property crime rate and high school dropout rate.
This book offers a unique and insightful econometric evaluation of the policies used to fight transnational terrorism between 1990 and 2014. It uses the tools of modern economics, game theory and structural econometrics to analyze the roles of foreign aid, educational capital, and military intervention. Jean-Paul Azam and Veronique Thelen analyze panel data over 25 years across 124 countries. They prove that foreign aid plays a key role in inducing recipient governments to protect the donors' political and economic interests within their sphere of influence. Demonstrating that countries endowed with better educational capital export fewer terrorist attacks, they also illustrate that, in contrast, military intervention is counter-productive in abating terrorism. Recognizing the strides taken by the Obama administration to increase the role of foreign aid and reduce the use of military interventions, this book shows the significant impact this has had in reducing the number of transnational terrorist attacks per source country, and suggests further developments in this vein. Practical and timely, this book will be of particular interest to students and scholars of economics and political science, as well as those working on the wider issue of terrorism. Presenting a series of new findings, the book will also appeal to international policy makers and government officials.
Business Statistics with Solutions in R covers a wide range of applications of statistics to solving business-related problems. It introduces readers to the quantitative tools necessary for daily business needs and helps them make evidence-based decisions. The book provides insight into how to summarize data, analyze it, and draw meaningful inferences that can be used to improve decisions, and it enables readers to develop computational skills and problem-solving competence using the open-source language R. Mustapha Abiodun Akinkunmi uses real-life business data for illustrative examples while discussing the basic statistical measures, probability, regression analysis, significance testing, correlation, the Poisson distribution, process control for manufacturing, time series analysis, forecasting techniques, exponential smoothing, and univariate and multivariate analysis including ANOVA and MANOVA, in this valuable reference for policy makers, professionals, academics and individuals interested in the areas of business statistics, applied statistics, statistical computing, finance, management and econometrics.
This book is devoted to the analysis of causal inference, one of the most difficult tasks in data analysis: when two phenomena are observed to be related, it is often difficult to decide whether one of them causally influences the other, or whether the two phenomena have a common cause. This analysis is the main focus of this volume. To get a good understanding of causal inference, it is important to have models of economic phenomena that are as accurate as possible. Because of this need, this volume also contains papers that use non-traditional economic models, such as fuzzy models and models obtained by using neural networks and data mining techniques. It also contains papers that apply different econometric models to analyze real-life economic dependencies.
Methods for Estimation and Inference in Modern Econometrics provides a comprehensive introduction to a wide range of emerging topics, such as generalized empirical likelihood estimation and alternative asymptotics under drifting parameterizations, which have not been discussed in detail outside of highly technical research papers. The book also addresses several problems often arising in the analysis of economic data, including weak identification, model misspecification, and possible nonstationarity. The book's appendix provides a review of some basic concepts and results from linear algebra, probability theory, and statistics that are used throughout the book.
Offering a unified approach to studying econometric problems, Methods for Estimation and Inference in Modern Econometrics links most of the existing estimation and inference methods in a general framework to help readers synthesize all aspects of modern econometric theory. Various theoretical exercises and suggested solutions are included to facilitate understanding.
Drawing on his extensive experience teaching in the area, Geoff Renshaw has developed Maths for Economics to enable students to master and apply mathematical principles and methods both in their degrees and in their careers. Through a gradual learning gradient and the provision of examples and exercises that constantly reinforce learning, the author has created a resource that students can use to build their confidence, whether they come from a GCSE or A Level background or, more generally, feel they need to go back to the very basics. Knowledge is built up in small steps rather than big jumps, and once students are confident that they have firmly grasped the foundations, the book helps them progress beyond mechanical exercises to develop a maths tool-kit for the analysis of economic and business problems, an invaluable skill for their course and future employment. The Online Resource Centre contains the following resources:
For students: Ask the Author forum; Excel tutorial; Maple tutorial; further exercises; answers to further questions; expanded solutions to progress exercises.
For lecturers (password protected): test exercises; graphs from the book; answers to test exercises; PowerPoint presentations; instructor manual.
In this new and expanding area, Tony Lancaster's text is the first comprehensive introduction to the Bayesian way of doing applied econometrics.
The complexity, diversity, and random nature of transportation problems necessitate a broad analytical toolbox. Describing tools commonly used in the field, Statistical and Econometric Methods for Transportation Data Analysis, Second Edition provides an understanding of the broad range of analytical tools required to solve transportation problems. It includes a wide breadth of examples and case studies covering applications in various aspects of transportation planning, engineering, safety, and economics.
After a solid refresher on statistical fundamentals, the book focuses on continuous dependent variable models and count and discrete dependent variable models. Along with an entirely new section on other statistical methods, this edition offers a wealth of new material.
Each chapter clearly presents fundamental concepts and principles and includes numerous references for those seeking additional technical details and applications. To reinforce a practical understanding of the modeling techniques, the data sets used in the text are offered on the book's CRC Press web page. PowerPoint and Word presentations for each chapter are also available for download.
The Economics and Econometrics of the Energy-Growth Nexus recognizes that research in the energy-growth nexus field is heterogeneous and controversial. To make studies in the field as comparable as possible, chapters cover both aggregate and disaggregated energy consumption, and both single-country and multi-country analyses. As a foundational resource that helps researchers answer fundamental questions about their energy-growth projects, it combines theory and practice to classify and summarize the literature and explain the econometrics of the energy-growth nexus. The book provides order and guidance, enabling researchers to feel confident that they are adhering to widely accepted assumptions and procedures.
Trade Profiles 2018 provides a series of key indicators on trade in goods and services for 196 economies, highlighting the major exports and imports for each economy as well as their main trading partners. For each profile, the data is presented in a handy two-page format, providing a concise overview of global trade. For merchandise trade, exports and imports are broken down by main commodity groups and by major trading partners. Top exports and imports are listed for agricultural products and non-agricultural products. For trade in commercial services, data is provided for transport, travel, other commercial services and goods-related services. Statistics on intellectual property are also provided.
This bestselling and thoroughly classroom-tested textbook is a complete resource for finance students. A comprehensive and illustrated discussion of the most common empirical approaches in finance prepares students for using econometrics in practice, while detailed case studies help them understand how the techniques are used in relevant financial contexts. Worked examples from the latest version of the popular statistical software EViews guide students to implement their own models and interpret results. Learning outcomes, key concepts and end-of-chapter review questions (with full solutions online) highlight the main chapter takeaways and allow students to self-assess their understanding. Building on the successful data- and problem-driven approach of previous editions, this third edition has been updated with new data, extensive examples and additional introductory material on mathematics, making the book more accessible to students encountering econometrics for the first time. A companion website, with numerous student and instructor resources, completes the learning package.
One of the major problems of macroeconomic theory is the way in which people exchange goods in decentralized market economies. There are major disagreements among macroeconomists regarding the tools to use to influence the desired outcomes. Since mainstream efficient market theory fails to provide an internally coherent framework, there is a need for an alternative theory. The book provides an innovative approach to the analysis of agent-based models populated by heterogeneous, interacting agents in the field of financial fragility. The text is divided into two parts: the first presents analytical developments in stochastic aggregation and macro-dynamics inference methods; the second introduces macroeconomic models of financial fragility for complex systems populated by heterogeneous, interacting agents. The concepts of financial fragility and macroeconomic dynamics are explained in detail in separate chapters, and the statistical physics approach is applied to explain theories of macroeconomic modelling and inference.
This introductory statistics textbook conveys the essential concepts and tools needed to develop and nurture statistical thinking. It presents descriptive, inductive and explorative statistical methods and guides the reader through the process of quantitative data analysis. In the experimental sciences and interdisciplinary research, data analysis has become an integral part of any scientific study. Issues such as judging the credibility of data, analyzing the data, evaluating the reliability of the obtained results and finally drawing the correct and appropriate conclusions from the results are vital. The text is primarily intended for undergraduate students in disciplines like business administration, the social sciences, medicine, politics, macroeconomics, etc. It features a wealth of examples, exercises and solutions with computer code in the statistical programming language R as well as supplementary material that will enable the reader to quickly adapt all methods to their own applications.
Business Statistics for Management and Economics is an application-oriented text providing students with a solid grounding in statistical theory and allowing them to make the most of data analysis techniques. Students learn through examples and applications of the most common statistical concepts and techniques used in business, economics and management.
This book provides a selection of pioneering papers or extracts ranging from Pascal (1654) to R.A. Fisher (1930). The authors' annotations put the articles in perspective for the modern reader. A special feature of the book is the large number of translations, nearly all made by the authors. The selected articles vary considerably in difficulty; some require only a basic understanding of statistical concepts, whereas others surprise by their early sophistication in "classical" statistics. There are several reasons for studying the history of statistics: intrinsic interest in how the field of statistics developed, learning from often brilliant ideas rather than reinventing the wheel, and livening up general courses in statistics by reference to important contributors. Herbert A. David is Distinguished Professor Emeritus in the Department of Statistics, Iowa State University, and served as Department Head from 1972 to 1984. He was Editor of Biometrics from 1967 to 1972 and President of the Biometric Society for 1982-1983. His publications include books on Order Statistics (Wiley 1970, 1981) and The Method of Paired Comparisons (Griffin 1963, 1988). Apart from articles in these two areas, he has written on statistical inference, experimental design, competing risks, and the history of statistics. He received a Ph.D. in statistics from University College London in 1953. A.W.F. Edwards is Reader in Biometry at the University of Cambridge. He was President of the British Region of the Biometric Society in 1992-1994 and is Chairman of the Christiaan Huygens Committee for the History of Statistics of the International Statistical Institute. His publications include the books Likelihood (Cambridge University Press 1972, Johns Hopkins University Press 1992), Foundations of Mathematical Genetics (Cambridge University Press 1977, 2000), and Pascal's Arithmetical Triangle (Griffin 1987). He holds the degrees of Ph.D. and Sc.D. from Cambridge University.
In many applications of econometrics and economics, a large proportion of the questions of interest concern identification. An economist may be interested in uncovering the true signal when the data are very noisy, as in time-series spurious regression and weak-instruments problems, to name a few. In this book, High-Dimensional Econometrics and Identification, we illustrate that the true signal, and hence identification, can be recovered even from noisy high-dimensional data, e.g., large panels. High-dimensional data in econometrics is the rule rather than the exception, and one of the tools for analyzing large, high-dimensional data is the panel data model. High-Dimensional Econometrics and Identification grew out of research work on identification and high-dimensional econometrics that we have collaborated on over the years. It aims to provide an up-to-date presentation of the issues of identification and high-dimensional econometrics, as well as insights into the use of these results in empirical studies. This book is designed for high-level graduate courses in econometrics and statistics, and it can also serve as a reference for researchers.
Digital Asset Valuation and Cyber Risk Measurement: Principles of Cybernomics is a book about the future of risk and the future of value. It examines the indispensable role of economic modeling in the future of digitization, thus providing industry professionals with the tools they need to optimize the management of the financial risks associated with this megatrend. The book addresses three problem areas: the valuation of digital assets, the measurement of the risk exposures of digital valuables, and economic modeling for the management of such risks. Employing a pair of novel cyber risk measurement units, bitmort and hekla, the book covers the areas of value, risk, control, and return, each of which is viewed from the perspective of entity (e.g., individual, organization, business), portfolio (e.g., industry sector, nation-state), and global ramifications. Establishing adequate, holistic, and statistically robust data points on the entity, portfolio, and global levels for the development of a cybernomics databank is essential for the resilience of our shared digital future. This book also argues that existing economic value theories no longer apply to the digital era due to the unique characteristics of digital assets. It introduces six laws of the digital theory of value, with the aim of adapting economic value theories to the digital and machine era.
A complete and up-to-date survey of microeconometric methods available in Stata, Microeconometrics Using Stata, Revised Edition is an outstanding introduction to microeconometrics and how to execute microeconometric research using Stata. It covers topics left out of most microeconometrics textbooks and omitted from basic introductions to Stata.
This revised edition has been updated to reflect the new features available in Stata 11 that are useful to microeconomists. Instead of using mfx and the user-written margeff commands, the authors employ the new margins command, emphasizing both marginal effects at the means and average marginal effects. They also replace the xi command with factor variables, which allow you to specify indicator variables and interaction effects. Along with several new examples, this edition presents the new gmm command for generalized method of moments and nonlinear instrumental-variables estimation. In addition, the chapter on maximum likelihood estimation incorporates enhancements made to ml in Stata 11.
Throughout the book, the authors use simulation methods to illustrate features of the estimators and tests described, and they provide an in-depth Stata example for each topic discussed. They also show how to use Stata's programming features to implement methods for which Stata does not have a specific command. The unique combination of topics, intuitive introductions to methods, and detailed illustrations of Stata examples makes this book an invaluable, hands-on addition to the library of anyone who uses microeconometric methods.
Increasing concerns regarding the world's natural resources and sustainability continue to be a major issue for global development. As a result, several political initiatives and strategies for green or resource-efficient growth, on both national and international levels, have been proposed. A core element of these initiatives is the promotion of an increase in resource or material productivity. This dissertation examines material productivity developments in the OECD and BRICS countries between 1980 and 2008. By applying the concept of convergence from economic growth theory to material productivity, the analysis provides insights into both aspects: material productivity developments in general, as well as potentials for accelerated improvements in material productivity, which may consequently allow a reduction of material use globally. The results of the convergence analysis underline the importance of policy-making with regard to technology and innovation policy that enables the production of resource-efficient products and services, as well as technology transfer and diffusion.
This book is about learning from data using the Generalized Additive Models for Location, Scale and Shape (GAMLSS). GAMLSS extends the Generalized Linear Models (GLMs) and Generalized Additive Models (GAMs) to accommodate large, complex datasets, which are increasingly prevalent. In particular, the GAMLSS statistical framework enables flexible regression and smoothing models to be fitted to the data. The GAMLSS model assumes that the response variable has any parametric (continuous, discrete or mixed) distribution, which might be heavy- or light-tailed, and positively or negatively skewed. In addition, all the parameters of the distribution (location, scale, shape) can be modelled as linear or smooth functions of explanatory variables. Key features:
*Provides a broad overview of flexible regression and smoothing techniques to learn from data, whilst also focusing on the practical application of methodology using the GAMLSS software in R.
*Includes a comprehensive collection of real data examples, which reflect the range of problems addressed by GAMLSS models and provide a practical illustration of the process of using flexible GAMLSS models for statistical learning.
*R code integrated into the text for ease of understanding and replication.
*Supplemented by a website with code, data and extra materials.
This book aims to help readers understand how to learn from data encountered in many fields. It will be useful for practitioners and researchers who wish to understand and use GAMLSS models to learn from data, and also for students who wish to learn GAMLSS through practical examples.
The original research papers collected in this volume continue the development of discrete choice analysis, of related structural models for analysis of choice behavior, and of the statistical theory used in inference on these models. Most papers in the volume are revised versions of ones presented at a 2005 conference in honor of Daniel L. McFadden, whose fundamental research made discrete choice analysis part of the fabric of modern economics.