NAHB: Data Staging in Python

The National Association of Home Builders, or NAHB, is an organization that represents housing interests within the United States. This includes affordable housing, home building techniques and methods, and the promotion of home ownership.

In conjunction with these goals and functions, NAHB releases data about housing in the United States. The data tracks various metrics including Median Income, Interest Rates, and NAHB's very own Housing Opportunity Index, or HOI.

NAHB calculates the HOI by comparing home prices to income in a given area. For example, if income rises in Dallas-Ft. Worth and home prices remain the same, then the HOI increases. Alternatively, if income remains the same while home prices rise, the HOI falls.
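To make the relationship concrete, here is a toy illustration of how an affordability-style index moves when income or prices change. This is not NAHB's actual HOI formula, and the figures are made up.

# Toy affordability index (illustrative only, not NAHB's actual HOI methodology)
def toy_index(median_income, median_price):
    return median_income / median_price

base = toy_index(70000, 280000)           # 0.25
income_rises = toy_index(77000, 280000)   # 0.275 -> index rises
prices_rise = toy_index(70000, 308000)    # ~0.227 -> index falls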

I wanted to visualize this dataset in Tableau; however, when I opened the spreadsheet, it was in a format incompatible with Tableau.

The raw NAHB spreadsheet.

While the format is acceptable for basic spreadsheet analysis, it lacks the proper long-form layout required for analysis in Tableau. Using Python, I wanted to convert this spreadsheet into a CSV with the following features:

  • Variable name column headers (Median Income, FIP, etc.)
  • One row per quarter per metropolitan statistical area
  • Proper datetime and numerical formats

My Python script begins by importing the proper dependencies. In this instance, I need pandas, numpy, and datetime.

import pandas as pd
import numpy as np
import datetime as dt

Next, I use Pandas to read the Excel file, remove unneeded rows, and melt the pivoted table into long form. From there, I rename two columns for better readability.

# Read the raw spreadsheet and keep only rows that have a name
df = pd.read_excel('housingdata.xls')
df = df.loc[df.NAME.notnull()]

# Drop rows whose flag is 1 or 8
df1 = df[~df['flag'].isin([1,8])]

# Melt the wide, pivoted layout into long form
dfmelt = pd.melt(df1, id_vars=['msa_fip','NAME','flag'])

# Rename columns for readability
dfmelt.rename(columns={'variable':'date',
                       'NAME':'variable'},
              inplace=True)

I use the melted dataset to create a new column called index, which is used to pivot the data into a format that Tableau can read.

dfmelt['msa_fip'] = dfmelt['msa_fip'].apply(str)

dfmelt['index'] = dfmelt['msa_fip'] + dfmelt['date']

dfpivot = dfmelt.pivot(index='index', 
                       columns='variable', 
                       values='value').reset_index()

Next, I slice pieces of the index column into new columns and use them to build a proper datetime column.

# Slice the concatenated index into its FIP, quarter, and two-digit year parts
dfpivot['FIP'] = dfpivot['index'].str[:5]
dfpivot['Quarter'] = dfpivot['index'].str[6:7]
dfpivot['Year'] = dfpivot['index'].str[8:]

dfpivot['Year'] = dfpivot['Year'].apply(int)

# Two-digit years above 80 belong to the 1900s; the rest belong to the 2000s
dfpivot['Year'] = np.where(dfpivot['Year']>80,
                           1900 + dfpivot['Year'],
                           2000 + dfpivot['Year'])

dfpivot['Year'] = dfpivot['Year'].apply(str)

# Map each quarter to a representative month and day
dfpivot['Quarter'] = dfpivot['Quarter'].replace({'1':'01-01',
                                                 '2':'03-01',
                                                 '3':'06-01',
                                                 '4':'09-01'})

# Combine the year and month-day strings and parse them as datetimes
dfpivot['Date'] = (dfpivot['Year'] + '-' + dfpivot['Quarter'])

dfpivot['Date'] = pd.to_datetime(dfpivot['Date'])

Lastly, I collect the unique names of all metropolitan statistical areas from the initial dataframe and left-join this smaller dataset, called names, to the newly formatted dataset.

names = df[['NAME','msa_fip']].loc[df.flag == 1].drop_duplicates()
names['msa_fip'] = names['msa_fip'].apply(str)
dfpivot['FIP'] = dfpivot['FIP'].apply(str)
names.set_index('msa_fip', inplace=True)
dfpivot.set_index('FIP', inplace=True)

output = (dfpivot.join(names,
                       how='left')).reset_index()

output['FIP'] = output['index']
output.drop(columns='index', inplace=True)

This final dataframe can now be saved as a CSV or Excel file for further analysis in Tableau.
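For example, saving to CSV takes a single line (the file name here is just a placeholder):

# Save the staged data for Tableau (file name is a placeholder)
output.to_csv('housingdata_staged.csv', index=False)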

The NAHB data after staging it in Python.

Now that the data has been staged and saved as a CSV, we can conduct deeper analysis. Using Tableau Public, I created two visualizations about the Housing Opportunity Index.

The first visualization highlights changes to the index over time, averaged across all metropolitan statistical areas in the country.

A time series plot of the HOI over time.

The second visualization is a scatterplot that compares the median home price in each metropolitan statistical area to the HOI in that area. As one might suspect, home prices are inversely correlated with housing opportunity: lower prices mean greater affordability and a higher HOI.

A scatterplot comparing the HOI to median home prices.

Writing a Machine Learning Classifier: K-Nearest Neighbors

Machine learning is a subset of Artificial Intelligence. Despite the imposing name, machine learning simply refers to developing methods that allow computers to learn from new information and make predictions.

Imagine you wish to write a program that can identify the species of a flower based on a few measurements. You could write a series of if-statements that guide a computer to classify the flower’s species. However, there are two key issues with this approach.
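To make that concrete, a hand-written, rule-based classifier might look something like the sketch below (the thresholds are invented for illustration, not derived from real data).

# A rule-based classifier written by hand (thresholds invented for illustration)
def classify_flower(petal_length, petal_width):
    if petal_length < 2.5:
        return 'setosa'
    elif petal_width < 1.75:
        return 'versicolor'
    else:
        return 'virginica'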

First, if a human explicitly writes the rules by which the computer classifies a flower's species, those rules will carry the writer's bias, since no person can take in all of the data needed to classify a flower. Second, whenever new data is introduced, the rules (if-statements) must be rewritten, which takes valuable time.

For these reasons, a new solution is needed. This solution must adapt to new data, require no explicit writing of rules, and be computationally efficient. In other words, we need a program that can learn.

K-Nearest Neighbors

When discussing machine learning, there is a myriad of methods and models to choose from. Some of these models blur the lines of classical statistics, including forms of regression, while others replicate the structure of the human brain using neurons.

To solve our classification problem, we will be using a model called K-Nearest Neighbors. As the name suggests, this model assumes that when a new data point is added, it is likely to be of the same type as its nearest already-classified neighbor.

A visual representation of a K-Nearest Neighbor Classifier

In the example above, the x-axis denotes a flower's petal width while the y-axis denotes its petal length. You can see that the blue flowers have shorter but wider petals than the red flowers. Let's say you add a new point (shown in yellow). What type of flower is the yellow point? Red or blue?

According to the model, it is a red flower. This is because it is physically closest to a data point that is already classified as red.
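A minimal sketch of that decision, using made-up coordinates in the spirit of the figure (these are not the actual plotted values):

import numpy as np

# Labeled points: blue flowers are wider but shorter, red flowers narrower but longer
points = np.array([[2.0, 4.2], [1.8, 4.5],    # blue
                   [0.4, 6.0], [0.5, 6.3]])   # red
labels = np.array(['blue', 'blue', 'red', 'red'])

new_point = np.array([0.6, 5.8])   # the "yellow" point

# Classify the new point by the label of its nearest neighbor
distances = np.sqrt(((points - new_point)**2).sum(axis=1))
print(labels[np.argmin(distances)])   # -> 'red'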

Writing a KNN Classifier

Using Python 3 and Jupyter Notebooks, I have written my own KNN Classifier. While pre-made KNN classifiers are widely available, writing your own provides greater knowledge of how the model works. I begin by importing the necessary Python packages for this program.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%xmode Minimal
%matplotlib inline

Once the packages are imported, I load a famous machine learning dataset called Iris. It includes four columns of measurements for 150 iris flowers, which the model learns from, and a fifth column containing each flower's classification.

path = 'iris.csv'
testing_size = .2
data = pd.read_csv(path, header = None)

From here, I need to handle the classifications, which are stored as text in the Pandas DataFrame. In the following code, I build a dictionary mapping each unique value to a number and apply that mapping to the classification column of the DataFrame.

# Build a dictionary mapping each unique class label to an integer
classes = pd.Series(data.iloc[:,-1].unique())
i = 1
dictionary = {}
for item in classes:
    dictionary.update( {item : i} )
    i = i + 1
# Replace the text labels with their integer codes
data = data.replace(dictionary)

Next, I convert the DataFrame into a NumPy array and randomly shuffle the array. I also slice the resulting array into 80% training data (for the model to learn) and 20% testing data (to test how accurate the model is).

# Convert to a NumPy array, shuffle, and split into testing and training sets
array = np.array(data)
np.random.shuffle(array)
test_num = round((np.size(array,0))*testing_size)
test = array[:test_num,:]
train = array[test_num:,:]

After preparing the data, I use NumPy array broadcasting to determine the distance of each training data point from each testing data point. For each testing point, I then use NumPy functions to locate the index of the closest training point and record its classification as the prediction.

input_array = test[:,:-1]
x = []
for row in input_array:
    # Distance from this test point to every training point
    # (element-wise absolute differences summed, i.e. the Manhattan/L1 distance)
    distances = (((row - train[:,:-1])**2)**.5).sum(axis=1)
    # Predict the class of the nearest training point
    min_index = np.argmin(distances)
    classification = train[min_index,-1]
    x.append(classification)

# Stack the predictions into a column vector
predict = np.array(x)[:,np.newaxis]

Finally, I combine the predicted classification with the actual classification of the testing data to determine the accuracy of the model.

output = np.hstack((test,predict))
correct = np.count_nonzero(output[:,-1] == output[:,-2])
total = np.size(output[:,-1])
accuracy = round(((correct/total)*100),2)

The classifier takes (on average) about 1 millisecond to run through the data, and the model is always at least 90% accurate (sometimes higher, depending on how the NumPy array was shuffled).
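A quick way to check the timing is to wrap the prediction loop with time.perf_counter (a sketch added here for illustration; it simply re-runs the loop from above):

import time

start = time.perf_counter()
preds = []
for row in test[:,:-1]:
    distances = (((row - train[:,:-1])**2)**.5).sum(axis=1)
    preds.append(train[np.argmin(distances),-1])
elapsed_ms = (time.perf_counter() - start) * 1000
print(round(elapsed_ms, 3), 'ms')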

While the model developed here is not nearly as optimized as Scikit-Learn's (especially for larger datasets), it does provide insight into how a relatively simple classification model is developed and tested. More importantly, it reveals that machine learning, while very clever, is nothing more than mathematics.
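For comparison, here is a sketch of the same task using Scikit-Learn's built-in classifier (n_neighbors=1 mirrors the single-nearest-neighbor logic above, and the 20% split matches the one used earlier):

# The same task with Scikit-Learn's built-in KNN, for comparison
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X = array[:,:-1]
y = array[:,-1].astype(int)   # integer class codes from the dictionary step
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X_train, y_train)
print(round(clf.score(X_test, y_test) * 100, 2))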