Inferential analysis is never a completely linear process. It involves many iterations of cleaning data, describing data, visualizing data, and modeling data until you are confident about the process and the results.
[ Figure: example diagram of the iterative loops in the process of data analysis ]
The first steps in this process are simply describing your data accurately and thoroughly, and showing some of the core features of the data through visualization.
I personally never trust a regression model until I know a lot about the data that goes into it. Outliers, skewed distributions, and non-linear relationships can wreak havoc on inferential models. So until I feel comfortable that correlations, and not model specification, are driving the results, I will keep exploring the data and testing assumptions about key relationships.
Describing the dependent variable is especially important because you need to understand what you are trying to explain. In this case, after doing the analysis presented in the tutorial, I noted that the lower bound of growth in Median Home Values is -100% (which makes sense, since it would be hard to lose more than 100% of your value), and the upper tail tapers off around 200%, although a handful of tracts have growth rates above 500%, which seems unrealistic and likely not meaningful. I also noted that the mean is larger than the median by a significant margin (almost 10 percentage points) and the distribution is skewed right. This should alert you that outliers might be a problem, and some transformation of the dependent variable might be advisable.
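A few lines of R can reproduce these checks. This is only a minimal sketch, assuming a data frame `d` with a percent-growth variable `mhv.growth`; substitute whatever names your dataset actually contains.

```r
# quick descriptives for the dependent variable
# (hypothetical names: data frame `d`, growth variable `mhv.growth`)
summary( d$mhv.growth )                # min, quartiles, mean, max

# a mean well above the median signals a right-skewed distribution
mean( d$mhv.growth, na.rm=TRUE )
median( d$mhv.growth, na.rm=TRUE )

# visualize the long right tail
hist( d$mhv.growth, breaks=100,
      xlab="Growth in Median Home Value (%)",
      main="Distribution of Home Value Growth" )

# how many tracts report implausible growth above 500%?
sum( d$mhv.growth > 500, na.rm=TRUE )
```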
More importantly, describing and visualizing data helps you catch problems that occurred during the data cleaning and data wrangling steps. The descriptives help you identify values that seem unrealistic or relationships that seem off. If you are surprised by something you see in the data, it is always a good idea to revisit the data steps to make sure an error was not introduced by the code.
This lab asks you to explore your new longitudinal census dataset to become better acquainted with the outcomes in the study.
Start by reading the introduction to a recent paper published by Baum-Snow and Hartley:
Accounting for Central Neighborhood Change, 1980-2010 [ PDF ]
“Neighborhoods within 2 km of most central business districts of U.S. metropolitan areas experienced population declines from 1980 to 2000 but have rebounded markedly since 2000 at greater pace than would be expected from simple mean reversion. Statistical decompositions reveal that 1980-2000 departures of residents without a college degree (of all races) generated most of the declines while the return of college educated whites and the stabilization of neighborhood choices by less educated whites promoted most of the post-2000 rebound. The rise of childless households and the increase in the share of the population with a college degree, conditional on race, also promoted 1980-2010 increases in central area population and educational composition of residents, respectively. Estimation of a neighborhood choice model shows that changes in choices to live in central neighborhoods primarily reflect a shifting balance between rising home prices and valuations of local amenities, though 1980-2000 central area population declines also reflect deteriorating nearby labor market opportunities for low skilled whites.”
It was somewhat surprising to see such high growth rates of home value in the US data from 2000 to 2010 (25% to 35% on average), especially considering that values are adjusted for inflation and that 2010 values were measured immediately following the housing crisis.
For Part 01 of the lab, reproduce the descriptive analysis demonstrated in the tutorial, but for the 1990 to 2000 period instead of the 2000 to 2010 period used in the tutorial. Note that we are only using urban tracts rather than all urban and rural tracts.
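The skeleton of that analysis might look like the sketch below; the variable names (`mhv.90`, `mhv.00`, `urban`) are placeholders for whatever your longitudinal dataset actually uses.

```r
# keep urban tracts only (hypothetical flag variable `urban`)
d.urban <- d[ d$urban == "urban" , ]

# percent change in inflation-adjusted median home value, 1990-2000
d.urban$mhv.growth.90 <- 100 * ( d.urban$mhv.00 - d.urban$mhv.90 ) / d.urban$mhv.90

summary( d.urban$mhv.growth.90 )

hist( d.urban$mhv.growth.90, breaks=100,
      xlab="Growth in Median Home Value, 1990-2000 (%)",
      main="Urban Tracts Only" )
```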
How do changes in home value differ between the 1990-2000 and 2000-2010 periods?
If you have time, what do the authors suggest would predict the fall in central city home values between 1990 and 2000?
You were asked to come up with a way to conceptualize gentrification.
This week you will create a new variable in your dataset that allows you to operationalize gentrification and examine its prevalence in the data. How many census tracts are candidates (start out at a low income level with high diversity)? And of those, how many have transitioned into advanced stages of gentrification?
Provide an explanation and justification of the way you measure gentrification in the data.
Check out the Urban Displacement Project’s METHODOLOGY for some ideas.
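As a starting point only, here is one minimal sketch of a candidate flag. The thresholds and the variable names (`hinc.90` for household income, `p.nonwhite.90` for the non-white population share, `p.col.90`/`p.col.00` for the college-educated share, `mhv.growth` for home value growth) are assumptions you should replace with, and justify in, your own operationalization.

```r
# candidates: tracts that start (1990) with low income and high diversity
# (all thresholds below are illustrative, not prescribed)
income.cutoff    <- quantile( d$hinc.90, 0.25, na.rm=TRUE )   # bottom quartile of income
diversity.cutoff <- 0.50                                       # majority non-white

d$candidate <- d$hinc.90 < income.cutoff &
               d$p.nonwhite.90 > diversity.cutoff

# "advanced" gentrification: candidates with large home value gains and
# a rising college-educated share over the decade
d$gentrified <- d$candidate &
                d$mhv.growth > quantile( d$mhv.growth, 0.75, na.rm=TRUE ) &
                d$p.col.00 > d$p.col.90

sum( d$candidate,  na.rm=TRUE )   # how many tracts are candidates?
sum( d$gentrified, na.rm=TRUE )   # how many have transitioned?
```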
Histograms and scatterplots help us understand the statistical properties of and relationships between variables in our dataset. Due to the nature of cities, these relationships are all shaped by where tracts are located within the city. Geography matters a great deal in urban policy, so it should not be ignored.
Simple choropleth maps can be extremely helpful in understanding relationships in the data.
Revisit one of the labs from CPP 529 and prepare a Dorling Cartogram shapefile for your project. You can select one or more cities of your choice (you can use the same city if you like).
Follow the instructions for saving your cartogram as a GeoJSON file. These files are nice because they can be stored on GitHub and read directly into R. If you create it right the first time you can use it over and over after that.
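A minimal sketch of that workflow appears below. It assumes an sf object of census tracts (`tracts.sf`) with a population column (`POP`); the projection, the `k` scaling factor, and the file name are all choices you will make for your own city.

```r
library( sf )
library( cartogram )
library( geojsonio )

# Dorling cartograms require projected (planar) coordinates
tracts.sf <- st_transform( tracts.sf, crs=3395 )

# circle size proportional to tract population (hypothetical column `POP`)
dorling <- cartogram_dorling( x=tracts.sf, weight="POP", k=0.05 )

# save as GeoJSON so the file can live on GitHub and be reused in later labs
geojson_write( dorling, file="my-city_dorling.geojson", geometry="polygon" )

# read it back later, e.g. from the raw GitHub URL of the file
# dorling <- geojson_read( "my-city_dorling.geojson", what="sp" )
```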
Please create the following choropleth maps and report your main findings (a mapping sketch follows the list):
Home Values
Gentrification
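One way to build these maps is with the `tmap` package; treat this as a hedged sketch, assuming the Dorling cartogram object `dorling` carries the home value growth variable and the gentrification flag created above, rather than a required recipe.

```r
library( tmap )

# home values: quantile breaks highlight relative differences across tracts
tm_shape( dorling ) +
  tm_polygons( col="mhv.growth", style="quantile", n=7, palette="-RdBu",
               title="Growth in Median Home Value (%)" )

# gentrification: candidate vs. transitioned tracts (hypothetical flag `gentrified`)
tm_shape( dorling ) +
  tm_polygons( col="gentrified", palette="Set2",
               title="Gentrification" )
```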
Do you find any meaningful patterns in where gentrification occurs?
Record your work in an RMD file where you can document your code and responses to the questions. Knit your RMD file and include your rendered HTML file.
Note that this lab will become one chapter in your final report. You will save time by drafting the lab as if it were a chapter of an external report rather than a regular lab.
Login to Canvas at http://canvas.asu.edu and navigate to the assignments tab in the course repository. Upload your zipped folder to the appropriate lab submission link.
Remember to follow a consistent coding style; see Google’s R Style Guide for examples.
If you are having problems with your RMD file, visit the RMD File Styles and Knitting Tips manual.
Note that when you knit a file, it starts from a blank slate. You might have packages loaded or datasets active in your local session, so code chunks run fine interactively. But when you knit, you might get errors saying that functions cannot be found or datasets do not exist. Be sure that you have included chunks that load these in your RMD file.
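For example, a setup chunk like the one sketched below near the top of the RMD keeps the knit session self-contained; the package list and file path are placeholders for whatever your analysis actually uses.

````markdown
```{r setup, include=FALSE}
# packages used anywhere in the document
library( dplyr )
library( tmap )

# the dataset used in later chunks (placeholder file name)
d <- readRDS( "data/census-panel.rds" )
```
````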
Your RMD file will not knit if you have errors in your code. If you get stuck on a question, just add eval=F to the code chunk and it will be ignored when you knit your file. That way I can give you credit for attempting the question and provide guidance on fixing the problem.
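For instance, a broken chunk flagged like this (the model call is just a placeholder) will be skipped when the file is knit:

````markdown
```{r, eval=F}
# this chunk has an unresolved error, so it is not evaluated when knitting
lm( mhv.growth ~ some.variable, data=d )
```
````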