This is a notebook from the live coding session by Dr Hugo Bowne-Anderson on April 10, 2020 via DataCamp.

Imports and data

Let's import the necessary packages from the SciPy stack and get the data.

In [1]:
# Import packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Set style & figures inline
sns.set()
%matplotlib inline
In [2]:
# Data urls
base_url = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/'
confirmed_cases_data_url = base_url + 'time_series_covid19_confirmed_global.csv'
death_cases_data_url = base_url + 'time_series_covid19_deaths_global.csv'
recovery_cases_data_url = base_url + 'time_series_covid19_recovered_global.csv'
# Import datasets as pandas dataframes
raw_data_confirmed = pd.read_csv(confirmed_cases_data_url)
raw_data_deaths = pd.read_csv(death_cases_data_url)
raw_data_recovered = pd.read_csv(recovery_cases_data_url)

Confirmed cases of COVID-19

We'll first check out the confirmed cases data by looking at the head of the dataframe:

In [3]:
raw_data_confirmed.head(n=10)
Out[3]:
   Province/State                Country/Region         Lat       Long  1/22/20  ...  4/8/20  4/9/20
0  NaN                           Afghanistan        33.0000    65.0000        0  ...     444     484
1  NaN                           Albania            41.1533    20.1683        0  ...     400     409
2  NaN                           Algeria            28.0339     1.6596        0  ...    1572    1666
3  NaN                           Andorra            42.5063     1.5218        0  ...     564     583
4  NaN                           Angola            -11.2027    17.8739        0  ...      19      19
5  NaN                           Antigua and Barbuda  17.0608  -61.7964       0  ...      19      19
6  NaN                           Argentina         -38.4161   -63.6167        0  ...    1715    1795
7  NaN                           Armenia            40.0691    45.0382        0  ...     881     921
8  Australian Capital Territory  Australia         -35.4735   149.0124        0  ...      99     100
9  New South Wales               Australia         -33.8688   151.2093        0  ...    2734    2773

10 rows × 83 columns

Discuss: What do you see here? We can also see a lot about the data by using the .info() and .describe() dataframe methods:

In [4]:
raw_data_confirmed.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 263 entries, 0 to 262
Data columns (total 83 columns):
Province/State    82 non-null object
Country/Region    263 non-null object
Lat               263 non-null float64
Long              263 non-null float64
1/22/20           263 non-null int64
1/23/20           263 non-null int64
1/24/20           263 non-null int64
1/25/20           263 non-null int64
1/26/20           263 non-null int64
1/27/20           263 non-null int64
1/28/20           263 non-null int64
1/29/20           263 non-null int64
1/30/20           263 non-null int64
1/31/20           263 non-null int64
2/1/20            263 non-null int64
2/2/20            263 non-null int64
2/3/20            263 non-null int64
2/4/20            263 non-null int64
2/5/20            263 non-null int64
2/6/20            263 non-null int64
2/7/20            263 non-null int64
2/8/20            263 non-null int64
2/9/20            263 non-null int64
2/10/20           263 non-null int64
2/11/20           263 non-null int64
2/12/20           263 non-null int64
2/13/20           263 non-null int64
2/14/20           263 non-null int64
2/15/20           263 non-null int64
2/16/20           263 non-null int64
2/17/20           263 non-null int64
2/18/20           263 non-null int64
2/19/20           263 non-null int64
2/20/20           263 non-null int64
2/21/20           263 non-null int64
2/22/20           263 non-null int64
2/23/20           263 non-null int64
2/24/20           263 non-null int64
2/25/20           263 non-null int64
2/26/20           263 non-null int64
2/27/20           263 non-null int64
2/28/20           263 non-null int64
2/29/20           263 non-null int64
3/1/20            263 non-null int64
3/2/20            263 non-null int64
3/3/20            263 non-null int64
3/4/20            263 non-null int64
3/5/20            263 non-null int64
3/6/20            263 non-null int64
3/7/20            263 non-null int64
3/8/20            263 non-null int64
3/9/20            263 non-null int64
3/10/20           263 non-null int64
3/11/20           263 non-null int64
3/12/20           263 non-null int64
3/13/20           263 non-null int64
3/14/20           263 non-null int64
3/15/20           263 non-null int64
3/16/20           263 non-null int64
3/17/20           263 non-null int64
3/18/20           263 non-null int64
3/19/20           263 non-null int64
3/20/20           263 non-null int64
3/21/20           263 non-null int64
3/22/20           263 non-null int64
3/23/20           263 non-null int64
3/24/20           263 non-null int64
3/25/20           263 non-null int64
3/26/20           263 non-null int64
3/27/20           263 non-null int64
3/28/20           263 non-null int64
3/29/20           263 non-null int64
3/30/20           263 non-null int64
3/31/20           263 non-null int64
4/1/20            263 non-null int64
4/2/20            263 non-null int64
4/3/20            263 non-null int64
4/4/20            263 non-null int64
4/5/20            263 non-null int64
4/6/20            263 non-null int64
4/7/20            263 non-null int64
4/8/20            263 non-null int64
4/9/20            263 non-null int64
dtypes: float64(2), int64(79), object(2)
memory usage: 170.7+ KB
In [5]:
raw_data_confirmed.describe()
Out[5]:
             Lat        Long    1/22/20  ...        4/9/20
count   263.0000    263.0000   263.0000  ...      263.0000
mean     21.3392     22.0681     2.1103  ...     6065.9696
std      24.7796     70.7859    27.4340  ...    33481.0885
min     -51.7963   -135.0000     0.0000  ...        0.0000
25%       6.9385    -21.0313     0.0000  ...       30.5000
50%      23.6345     20.1683     0.0000  ...      255.0000
75%      41.1789     79.5000     0.0000  ...     1235.5000
max      71.7069    178.0650   444.0000  ...   461437.0000

8 rows × 81 columns

Number of confirmed cases by country

Look at the head (or tail) of our dataframe again and notice that each row is the data for a particular province or state of a given country:

In [6]:
raw_data_confirmed.head()
Out[6]:
   Province/State  Country/Region       Lat     Long  1/22/20  ...  4/8/20  4/9/20
0  NaN             Afghanistan      33.0000  65.0000        0  ...     444     484
1  NaN             Albania          41.1533  20.1683        0  ...     400     409
2  NaN             Algeria          28.0339   1.6596        0  ...    1572    1666
3  NaN             Andorra          42.5063   1.5218        0  ...     564     583
4  NaN             Angola          -11.2027  17.8739        0  ...      19      19

5 rows × 83 columns

We want the numbers for each country, though. So the way to think about this is: for each country, we want to take all the rows (regions/provinces) that correspond to that country and add up the numbers for each date. To put this in data-analytic-speak, we want to group by the country column and sum up all the values for the other columns.

This is a common pattern in data analysis that we humans have been using for centuries. Interestingly, it was only formalized in 2011 by Hadley Wickham in his seminal paper The Split-Apply-Combine Strategy for Data Analysis. The pattern we're discussing is now called Split-Apply-Combine and, in the case at hand, we

  • Split the data into new datasets for each country,
  • Apply the function of "sum" for each new dataset (that is, we add/sum up the values for each column) to sum over territories/provinces/states for each country, and
  • Combine these datasets into a new dataframe.

The pandas API has the groupby method, which allows us to do this.

Side note: For more on split-apply-combine and pandas check out my post here.
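
To make the pattern concrete before applying it to the real data, here's a minimal sketch on a hypothetical mini-dataset (the numbers are illustrative only):

# Hypothetical mini-dataset: two rows for Australia, one for Canada
mini = pd.DataFrame({'Country/Region': ['Australia', 'Australia', 'Canada'],
                     '4/8/20': [99, 2734, 330],
                     '4/9/20': [100, 2773, 389]})
# Split on the country, apply 'sum' to each group, combine the results
mini.groupby(by='Country/Region').sum()
#                 4/8/20  4/9/20
# Country/Region
# Australia         2833    2873
# Canada             330     389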

In [7]:
# Group by country (and drop 'Lat' & 'Long', as it doesn't make sense to sum them here)
confirmed_country = raw_data_confirmed.groupby(by=['Country/Region']).sum().drop(['Lat', 'Long'], axis=1)
confirmed_country.head()
Out[7]:
                 1/22/20  1/23/20  ...  4/8/20  4/9/20
Country/Region
Afghanistan            0        0  ...     444     484
Albania                0        0  ...     400     409
Algeria                0        0  ...    1572    1666
Andorra                0        0  ...     564     583
Angola                 0        0  ...      19      19

5 rows × 79 columns

So each row of our new dataframe confirmed_country is a time series of the number of confirmed cases for each country. Cool! Now a dataframe has an associated object called an Index, which is essentially a set of unique identifiers for each row. Let's check out the index of confirmed_country:

In [8]:
confirmed_country.index
Out[8]:
Index(['Afghanistan', 'Albania', 'Algeria', 'Andorra', 'Angola',
       'Antigua and Barbuda', 'Argentina', 'Armenia', 'Australia', 'Austria',
       ...
       'United Arab Emirates', 'United Kingdom', 'Uruguay', 'Uzbekistan',
       'Venezuela', 'Vietnam', 'West Bank and Gaza', 'Western Sahara',
       'Zambia', 'Zimbabwe'],
      dtype='object', name='Country/Region', length=184)

It's indexed by Country/Region. That's all good, but if we index by date instead, it will allow us to produce some visualizations almost immediately. This is a nice aspect of the pandas API: you can make basic visualizations with it and, if your index consists of DateTimes, it knows that you're plotting time series and plays nicely with them. To make the index the set of dates, notice that the column names are the dates. To turn the column names into the index, we essentially want to make the columns the rows (and the rows the columns). This corresponds to taking the transpose of the dataframe:

In [9]:
confirmed_country = confirmed_country.transpose()
confirmed_country.head()
Out[9]:
Country/Region  Afghanistan  Albania  Algeria  ...  Australia  ...  Vietnam  ...  Zimbabwe
1/22/20                   0        0        0  ...          0  ...        0  ...         0
1/23/20                   0        0        0  ...          0  ...        2  ...         0
1/24/20                   0        0        0  ...          0  ...        2  ...         0
1/25/20                   0        0        0  ...          0  ...        2  ...         0
1/26/20                   0        0        0  ...          4  ...        2  ...         0

5 rows × 184 columns

Let's have a look at our index to see whether it actually consists of DateTimes:

In [10]:
confirmed_country.index
Out[10]:
Index(['1/22/20', '1/23/20', '1/24/20', '1/25/20', '1/26/20', '1/27/20',
       '1/28/20', '1/29/20', '1/30/20', '1/31/20', '2/1/20', '2/2/20',
       '2/3/20', '2/4/20', '2/5/20', '2/6/20', '2/7/20', '2/8/20', '2/9/20',
       '2/10/20', '2/11/20', '2/12/20', '2/13/20', '2/14/20', '2/15/20',
       '2/16/20', '2/17/20', '2/18/20', '2/19/20', '2/20/20', '2/21/20',
       '2/22/20', '2/23/20', '2/24/20', '2/25/20', '2/26/20', '2/27/20',
       '2/28/20', '2/29/20', '3/1/20', '3/2/20', '3/3/20', '3/4/20', '3/5/20',
       '3/6/20', '3/7/20', '3/8/20', '3/9/20', '3/10/20', '3/11/20', '3/12/20',
       '3/13/20', '3/14/20', '3/15/20', '3/16/20', '3/17/20', '3/18/20',
       '3/19/20', '3/20/20', '3/21/20', '3/22/20', '3/23/20', '3/24/20',
       '3/25/20', '3/26/20', '3/27/20', '3/28/20', '3/29/20', '3/30/20',
       '3/31/20', '4/1/20', '4/2/20', '4/3/20', '4/4/20', '4/5/20', '4/6/20',
       '4/7/20', '4/8/20', '4/9/20'],
      dtype='object')

Note that dtype='object', which means that these are strings, not DateTimes. We can use pandas to turn the index into a DatetimeIndex:

In [11]:
# Set index as DatetimeIndex
datetime_index = pd.DatetimeIndex(confirmed_country.index)
confirmed_country = confirmed_country.set_index(datetime_index)
# Check out index
confirmed_country.index
Out[11]:
DatetimeIndex(['2020-01-22', '2020-01-23', '2020-01-24', '2020-01-25',
               '2020-01-26', '2020-01-27', '2020-01-28', '2020-01-29',
               ...
               '2020-04-06', '2020-04-07', '2020-04-08', '2020-04-09'],
              dtype='datetime64[ns]', length=79, freq=None)
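
Side note: an equivalent route (a sketch; it should give the same result) is to parse the existing string index directly with pd.to_datetime:

# Equivalent alternative: convert the string index in place
confirmed_country.index = pd.to_datetime(confirmed_country.index)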

Now that we have a DatetimeIndex and countries as columns, we can use the dataframe plotting method to visualize the time series of the number of confirmed cases by country. As there are so many countries, we'll plot a subset of them:

Plotting confirmed cases by country

In [12]:
# Plot time series of several countries of interest
poi = ['China', 'US', 'Italy', 'France', 'Spain', 'Australia']
confirmed_country[poi].plot(figsize=(20, 10), linewidth=5, colormap='brg', fontsize=20)
Out[12]:
<matplotlib.axes._subplots.AxesSubplot at 0x1cfe5630160>

Let's label our axes and give the figure a title. We'll also thin the line and add points for the data so that the sampling is evident in our plots:

In [13]:
# Plot time series of several countries of interest
confirmed_country[poi].plot(figsize=(20,10), linewidth=2, marker='.', colormap='brg', fontsize=20)
plt.xlabel('Date', fontsize=20);
plt.ylabel('Reported Confirmed cases count', fontsize=20);
plt.title('Reported Confirmed Cases Time Series', fontsize=20);

Let's do this again but make the y-axis logarithmic:

In [14]:
# Plot time series of several countries of interest
confirmed_country[poi].plot(figsize=(20,10), linewidth=2, marker='.', fontsize=20, logy=True)
plt.xlabel('Date', fontsize=20);
plt.ylabel('Reported Confirmed cases count', fontsize=20);
plt.title('Reported Confirmed Cases Time Series', fontsize=20);

Discuss: Why do we plot with a log y-axis? How do we interpret the log plot? Key points:

  • If a variable takes on values over several orders of magnitude (e.g. in the 10s, 100s, and 1000s), we use a log axis so that the data is not all crammed into a small region of the visualization.
  • If a curve is approximately linear on a log axis, then it's growing approximately exponentially, and the gradient/slope of the line tells us about the exponent.

ESSENTIAL POINT: A logarithmic scale is good for visualization BUT remember, in the thoughtful words of Justin Bois, "on the ground, in the hospitals, we live with the linear scale. The flattening of the US curve, for example, is more evident on the log scale, but the growth is still rapid on a linear scale, which is what we feel."
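
To make the link between slope and growth rate concrete, here is a minimal sketch (assuming approximately exponential growth over the window; the two-week window and the choice of 'US' are illustrative) that estimates a doubling time from the slope of the log curve:

# If y = y0 * 2**(t / Td), then log10(y) is linear in t with slope log10(2) / Td
us = confirmed_country['US'].replace(0, np.nan).dropna()
log_y = np.log10(us.values[-14:])   # last two weeks (illustrative window)
slope = np.polyfit(np.arange(len(log_y)), log_y, 1)[0]
print(f"Approximate doubling time: {np.log10(2) / slope:.1f} days")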

Summary: We've

  • looked at the JHU data repository and imported the data,
  • looked at the dataset containing the number of reported confirmed cases for each region,
  • wrangled the data to look at the number of reported confirmed cases by country,
  • plotted the number of reported confirmed cases by country (on both linear and semi-log axes),
  • discussed why log plots are important for visualization and that we need to remember that we, as humans, families, communities, and society, experience COVID-19 linearly.

Number of reported deaths

As we did above for raw_data_confirmed, let's check out the head and the info of the raw_data_deaths dataframe:

In [15]:
raw_data_deaths.head()
Out[15]:
   Province/State  Country/Region       Lat     Long  1/22/20  ...  4/8/20  4/9/20
0  NaN             Afghanistan      33.0000  65.0000        0  ...      14      15
1  NaN             Albania          41.1533  20.1683        0  ...      22      23
2  NaN             Algeria          28.0339   1.6596        0  ...     205     235
3  NaN             Andorra          42.5063   1.5218        0  ...      23      25
4  NaN             Angola          -11.2027  17.8739        0  ...       2       2

5 rows × 83 columns

In [16]:
raw_data_deaths.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 263 entries, 0 to 262
Data columns (total 83 columns):
Province/State    82 non-null object
Country/Region    263 non-null object
Lat               263 non-null float64
Long              263 non-null float64
1/22/20           263 non-null int64
1/23/20           263 non-null int64
1/24/20           263 non-null int64
1/25/20           263 non-null int64
1/26/20           263 non-null int64
1/27/20           263 non-null int64
1/28/20           263 non-null int64
1/29/20           263 non-null int64
1/30/20           263 non-null int64
1/31/20           263 non-null int64
2/1/20            263 non-null int64
2/2/20            263 non-null int64
2/3/20            263 non-null int64
2/4/20            263 non-null int64
2/5/20            263 non-null int64
2/6/20            263 non-null int64
2/7/20            263 non-null int64
2/8/20            263 non-null int64
2/9/20            263 non-null int64
2/10/20           263 non-null int64
2/11/20           263 non-null int64
2/12/20           263 non-null int64
2/13/20           263 non-null int64
2/14/20           263 non-null int64
2/15/20           263 non-null int64
2/16/20           263 non-null int64
2/17/20           263 non-null int64
2/18/20           263 non-null int64
2/19/20           263 non-null int64
2/20/20           263 non-null int64
2/21/20           263 non-null int64
2/22/20           263 non-null int64
2/23/20           263 non-null int64
2/24/20           263 non-null int64
2/25/20           263 non-null int64
2/26/20           263 non-null int64
2/27/20           263 non-null int64
2/28/20           263 non-null int64
2/29/20           263 non-null int64
3/1/20            263 non-null int64
3/2/20            263 non-null int64
3/3/20            263 non-null int64
3/4/20            263 non-null int64
3/5/20            263 non-null int64
3/6/20            263 non-null int64
3/7/20            263 non-null int64
3/8/20            263 non-null int64
3/9/20            263 non-null int64
3/10/20           263 non-null int64
3/11/20           263 non-null int64
3/12/20           263 non-null int64
3/13/20           263 non-null int64
3/14/20           263 non-null int64
3/15/20           263 non-null int64
3/16/20           263 non-null int64
3/17/20           263 non-null int64
3/18/20           263 non-null int64
3/19/20           263 non-null int64
3/20/20           263 non-null int64
3/21/20           263 non-null int64
3/22/20           263 non-null int64
3/23/20           263 non-null int64
3/24/20           263 non-null int64
3/25/20           263 non-null int64
3/26/20           263 non-null int64
3/27/20           263 non-null int64
3/28/20           263 non-null int64
3/29/20           263 non-null int64
3/30/20           263 non-null int64
3/31/20           263 non-null int64
4/1/20            263 non-null int64
4/2/20            263 non-null int64
4/3/20            263 non-null int64
4/4/20            263 non-null int64
4/5/20            263 non-null int64
4/6/20            263 non-null int64
4/7/20            263 non-null int64
4/8/20            263 non-null int64
4/9/20            263 non-null int64
dtypes: float64(2), int64(79), object(2)
memory usage: 170.7+ KB

It seems to be structured similarly to raw_data_confirmed. I have checked it out in detail and can confirm that it is! This is good data design, as it means that users like us can explore, munge, and visualize it in a fashion analogous to the above. Can you remember what we did? We

  • Split-Apply-Combined it (and dropped 'Lat'/'Long'),
  • Transposed it,
  • Made the index a DatetimeIndex, and
  • Visualized it (linear and semi-log).

Let's now do the first three steps here for raw_data_deaths and see how we go:

Number of reported deaths by country

In [17]:
# Split-Apply-Combine
deaths_country = raw_data_deaths.groupby(by=['Country/Region']).sum().drop(['Lat', 'Long'], axis=1)

# Transpose
deaths_country = deaths_country.transpose()

# Set index as DatetimeIndex
datetime_index = pd.DatetimeIndex(deaths_country.index)
deaths_country = deaths_country.set_index(datetime_index)

# Check out head
deaths_country.head()
Out[17]:
Country/Region  Afghanistan  Albania  Algeria  Andorra  Angola  ...  Zambia  Zimbabwe
2020-01-22                0        0        0        0       0  ...       0         0
2020-01-23                0        0        0        0       0  ...       0         0
2020-01-24                0        0        0        0       0  ...       0         0
2020-01-25                0        0        0        0       0  ...       0         0
2020-01-26                0        0        0        0       0  ...       0         0

5 rows × 184 columns

In [18]:
# Check out the index
deaths_country.index
Out[18]:
DatetimeIndex(['2020-01-22', '2020-01-23', '2020-01-24', '2020-01-25',
               '2020-01-26', '2020-01-27', '2020-01-28', '2020-01-29',
               ...
               '2020-04-06', '2020-04-07', '2020-04-08', '2020-04-09'],
              dtype='datetime64[ns]', length=79, freq=None)

Plotting number of reported deaths by country

Let's now visualize the number of reported deaths:

In [19]:
# Plot time series of several countries of interest
deaths_country[poi].plot(figsize=(20, 10), linewidth=2, marker='.', colormap='brg', fontsize=20)
plt.xlabel('Date', fontsize=20);
plt.ylabel('Number of Reported Deaths', fontsize=20);
plt.title('Reported Deaths Time Series', fontsize=20);

Now on a semi-log plot:

In [20]:
# Plot time series of several countries of interest
deaths_country[poi].plot(figsize=(20,10), linewidth=2, marker='.', fontsize=20, colormap='brg', logy=True)
plt.xlabel('Date', fontsize=20);
plt.ylabel('Number of Reported Deaths', fontsize=20);
plt.title('Reported Deaths Time Series', fontsize=20);

Aligning growth curves to start with day of number of known deaths ≥ 25

To compare what's happening in different countries, we can align each country's growth curve to start on the day when the number of known deaths reached at least 25, as in the first figure here. To achieve this, first off, let's set all values less than 25 to NaN so that the associated data points don't get plotted at all when we visualize the data:

In [21]:
# Loop over columns & set values < 25 to None
for col in deaths_country.columns:
    deaths_country.loc[(deaths_country[col] < 25), col] = None

# Check out tail
deaths_country.tail()
Out[21]:
Country/Region  Afghanistan  Algeria  Andorra  Argentina  Australia  Austria  ...  United Kingdom  Zimbabwe
2020-04-05              NaN    152.0      NaN       44.0       35.0    204.0  ...          4943.0       NaN
2020-04-06              NaN    173.0      NaN       48.0       40.0    220.0  ...          5385.0       NaN
2020-04-07              NaN    193.0      NaN       56.0       45.0    243.0  ...          6171.0       NaN
2020-04-08              NaN    205.0      NaN       63.0       50.0    273.0  ...          7111.0       NaN
2020-04-09              NaN    235.0     25.0       72.0       51.0    295.0  ...          7993.0       NaN

5 rows × 184 columns
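
Side note: the column loop above can be replaced with a single vectorized call; a sketch that should behave identically:

# Keep values >= 25 and mask the rest as NaN in one go
deaths_country = deaths_country.where(deaths_country >= 25)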

Now let's plot as above to make sure we see what we think we should see:

In [22]:
# Plot time series of several countries of interest
poi = ['China', 'US', 'Italy', 'France', 'Australia']
deaths_country[poi].plot(figsize=(20,10), linewidth=2, marker='.', colormap='brg', fontsize=20)
plt.xlabel('Date', fontsize=20);
plt.ylabel('Number of Reported Deaths', fontsize=20);
plt.title('Reported Deaths Time Series', fontsize=20);

The countries that have seen fewer than 25 total deaths will now have columns of all NaNs, so let's drop these and then see how many columns we have left:

In [23]:
# Drop columns that are all NaNs (i.e. countries that haven't yet reached 25 deaths)
deaths_country.dropna(axis=1, how='all', inplace=True)
deaths_country.info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 79 entries, 2020-01-22 to 2020-04-09
Data columns (total 60 columns):
Algeria                   15 non-null float64
Andorra                   1 non-null float64
Argentina                 10 non-null float64
Australia                 7 non-null float64
Austria                   17 non-null float64
Belgium                   21 non-null float64
Bosnia and Herzegovina    4 non-null float64
Brazil                    19 non-null float64
Canada                    18 non-null float64
Chile                     6 non-null float64
China                     77 non-null float64
Colombia                  7 non-null float64
Czechia                   10 non-null float64
Denmark                   17 non-null float64
Dominican Republic        13 non-null float64
Ecuador                   17 non-null float64
Egypt                     14 non-null float64
Finland                   6 non-null float64
France                    31 non-null float64
Germany                   23 non-null float64
Greece                    15 non-null float64
Hungary                   7 non-null float64
India                     12 non-null float64
Indonesia                 22 non-null float64
Iran                      43 non-null float64
Iraq                      17 non-null float64
Ireland                   13 non-null float64
Israel                    9 non-null float64
Italy                     41 non-null float64
Japan                     25 non-null float64
Korea, South              39 non-null float64
Luxembourg                9 non-null float64
Malaysia                  14 non-null float64
Mexico                    10 non-null float64
Moldova                   2 non-null float64
Morocco                   13 non-null float64
Netherlands               24 non-null float64
North Macedonia           3 non-null float64
Norway                    12 non-null float64
Pakistan                  10 non-null float64
Panama                    10 non-null float64
Peru                      10 non-null float64
Philippines               19 non-null float64
Poland                    11 non-null float64
Portugal                  17 non-null float64
Romania                   14 non-null float64
Russia                    8 non-null float64
San Marino                11 non-null float64
Saudi Arabia              7 non-null float64
Serbia                    9 non-null float64
Slovenia                  5 non-null float64
Spain                     32 non-null float64
Sweden                    18 non-null float64
Switzerland               24 non-null float64
Thailand                  4 non-null float64
Tunisia                   1 non-null float64
Turkey                    19 non-null float64
US                        31 non-null float64
Ukraine                   7 non-null float64
United Kingdom            25 non-null float64
dtypes: float64(60)
memory usage: 37.6+ KB

As we're going to align the countries from the day they first had at least 25 deaths, we won't need the DatetimeIndex. In fact, we won't need the date at all. So we can

  • Reset the index, which will give us an ordinal index (this turns the date into a regular column), and
  • Drop the date column (which will be called 'index') after the reset.
In [24]:
# Reset the index and drop the resulting date column
deaths_country_drop = deaths_country.reset_index().drop(['index'], axis=1)
deaths_country_drop.head()
Out[24]:
Country/Region  Algeria  Andorra  Argentina  Australia  ...   US  Ukraine  United Kingdom
0                   NaN      NaN        NaN        NaN  ...  NaN      NaN             NaN
1                   NaN      NaN        NaN        NaN  ...  NaN      NaN             NaN
2                   NaN      NaN        NaN        NaN  ...  NaN      NaN             NaN
3                   NaN      NaN        NaN        NaN  ...  NaN      NaN             NaN
4                   NaN      NaN        NaN        NaN  ...  NaN      NaN             NaN

5 rows × 60 columns

Now it's time to shift each column so that its first entry is the first non-null value that it contains! To do this, we can use the shift() method on each column. How much do we shift each column, though? The magnitude of the shift is given by how many NaNs there are at the start of the column, which we can retrieve using the first_valid_index() method on the column. But we want to shift up, which is negative in direction (by convention and perhaps intuition). So let's do it.
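
Here's a minimal sketch of these two methods on a toy series (hypothetical values) before we apply them to the real columns:

s = pd.Series([np.nan, np.nan, 25.0, 31.0])
s.first_valid_index()   # 2: the position of the first non-null value
s.shift(-2)             # shifts the values up two rows: [25.0, 31.0, NaN, NaN]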

In [25]:
# shift
for col in deaths_country_drop.columns:
    deaths_country_drop[col] = deaths_country_drop[col].shift(-deaths_country_drop[col].first_valid_index())
# check out head
deaths_country_drop.head()
Out[25]:
Country/Region  Algeria  Andorra  Argentina  Australia  Austria  Belgium  ...  Turkey    US  Ukraine  United Kingdom
0                  25.0     25.0       27.0       28.0     28.0     37.0  ...    30.0  28.0     27.0            56.0
1                  26.0      NaN       28.0       30.0     30.0     67.0  ...    37.0  36.0     32.0            56.0
2                  29.0      NaN       36.0       35.0     49.0     75.0  ...    44.0  40.0     37.0            72.0
3                  31.0      NaN       39.0       40.0     58.0     88.0  ...    59.0  47.0     38.0           138.0
4                  35.0      NaN       43.0       45.0     68.0    122.0  ...    75.0  54.0     45.0           178.0

5 rows × 60 columns

Side note: instead of looping over columns, we could have applied a lambda function to the columns of the dataframe, as follows:

In [26]:
# shift using lambda function
#deaths_country_drop = deaths_country_drop.apply(lambda x: x.shift(-x.first_valid_index()))

Now we get to plot our time series, first with linear axes, then semi-log:

In [27]:
# Plot time series 
ax = deaths_country_drop.plot(figsize=(20,10), linewidth=2, marker='.', fontsize=20)
ax.legend(ncol=3, loc='upper right')
plt.xlabel('Days', fontsize=20);
plt.ylabel('Number of Reported Deaths', fontsize=20);
plt.title('Total reported coronavirus deaths for places with at least 25 deaths', fontsize=20);
In [28]:
# Plot semi-log time series
ax = deaths_country_drop.plot(figsize=(20,10), linewidth=2, marker='.', fontsize=20, logy=True)
ax.legend(ncol=3, loc='upper right')
plt.xlabel('Days', fontsize=20);
plt.ylabel('Number of Reported Deaths', fontsize=20);
plt.title('Total reported coronavirus deaths for places with at least 25 deaths', fontsize=20);

Note: although we have managed to plot what we wanted, it's hard to retrieve meaningful information from the plots above. There are too many growth curves, so the figure is very crowded, and too many colours look alike, so it's difficult to tell which country is which from the legend. Below, we'll plot fewer curves, and further down in the notebook we'll use the Python package Altair to introduce interactivity into the plot in order to deal with this challenge.

In [29]:
# Plot semi-log time series for several countries of interest
ax = deaths_country_drop[poi].plot(figsize=(20,10), linewidth=2, marker='.', fontsize=20, logy=True)
ax.legend(ncol=3, loc='upper right')
plt.xlabel('Days', fontsize=20);
plt.ylabel('Number of Reported Deaths', fontsize=20);
plt.title('Total reported coronavirus deaths for places with at least 25 deaths', fontsize=20);

Summary: We've

  • looked at the dataset containing the number of reported deaths for each region,
  • wrangled the data to look at the number of reported deaths by country,
  • plotted the number of reported deaths by country (on both linear and semi-log axes),
  • aligned growth curves to start with day of number of known deaths ≥ 25.

Plotting number of recovered people

The third dataset in the Hopkins repository is the number of recovered patients. We want to do similar data wrangling as in the two cases above, so we could copy and paste our code again but, if you're writing the same code three times, it's likely time to write a function.

In [30]:
# Function for grouping countries by region
def group_by_country(raw_data):
    """Returns data for countries indexed by date"""
    # Group by
    data = raw_data.groupby(by='Country/Region').sum().drop(['Lat', 'Long'], axis=1)
    # Transpose
    data = data.transpose()
    # Set index as DatetimeIndex
    datetime_index = pd.DatetimeIndex(data.index)
    data.set_index(datetime_index, inplace=True)
    return data
In [31]:
# Function to align growth curves
def align_curves(data, min_val):
    """Align growth curves  to start on the day when the number of known deaths = min_val"""
    # Loop over columns & set values < min_val to None
    for col in data.columns:
        data.loc[(data[col] < min_val), col] = None
    # Drop columns with all NaNs
    data.dropna(axis=1, how='all', inplace=True)
    # Reset index, drop date
    data = data.reset_index().drop(['index'], axis=1)
    # Shift each column to begin with first valid index
    for col in data.columns:
        data[col] = data[col].shift(-data[col].first_valid_index())
    return data
In [32]:
# Function to plot time series
def plot_time_series(df, plot_title, x_label, y_label, logy=False):
    """Plot time series and make looks a bit nice"""
    ax = df.plot(figsize=(20,10), linewidth=2, marker='.', fontsize=20, logy=logy)
    ax.legend(ncol=3, loc='lower right')
    plt.xlabel(x_label, fontsize=20);
    plt.ylabel(y_label, fontsize=20);
    plt.title(plot_title, fontsize=20);

For a sanity check, let's see these functions at work on the 'number of deaths' data:

In [33]:
deaths_country_drop = group_by_country(raw_data_deaths)
deaths_country_drop = align_curves(deaths_country_drop, min_val=25)
plot_time_series(deaths_country_drop, 'Reported Deaths by Country', 'Days', 'Number of Reported Deaths', logy=True)

Now let's use our functions to group, wrangle, and plot the recovered patients data:

In [34]:
# group by country and check out tail
recovered_country = group_by_country(raw_data_recovered)
recovered_country.tail()
Out[34]:
[Output: recovered counts for 2020-04-05 through 2020-04-09, one column per country]

5 rows × 184 columns

In [35]:
# align curves and check out head
recovered_country_drop = align_curves(recovered_country, min_val=25)
recovered_country_drop.head()
Out[35]:
Country/Region  Afghanistan  Albania  Algeria  Andorra  ...     US  United Kingdom  Vietnam  West Bank and Gaza
0                      29.0     31.0     32.0     26.0  ...  105.0            53.0     25.0                25.0
1                      32.0     31.0     32.0     31.0  ...  121.0            67.0     55.0                 NaN
2                       NaN     33.0     32.0     39.0  ...  147.0            67.0     58.0                42.0
3                       NaN     44.0     65.0     52.0  ...  176.0            67.0     63.0                44.0
4                       NaN     52.0     65.0     58.0  ...  178.0            67.0     75.0                44.0

5 rows × 105 columns

Plot time series:

In [36]:
plot_time_series(recovered_country_drop, 'Recovered Patients Time Series', 'Days', 'Recovered Patients count')
In [37]:
plot_time_series(recovered_country_drop, 'Recovered Patients Time Series', 'Days', 'Recovered Patients count', True)

Note: once again, it's hard to retrieve meaningful information from the plots above. There are too many growth curves, so the figure is very crowded, and too many colours look alike, so it's difficult to tell which country is which from the legend. Let's plot fewer curves; in the next section we'll use the Python package Altair to introduce interactivity into such a plot in order to deal with this challenge.

In [38]:
plot_time_series(recovered_country_drop[poi], 'Recovered Patients Time Series', 'Days', 'Recovered Patients count', True)

Summary: We've

  • looked at the dataset containing the number of reported recoveries for each region,
  • written functions for grouping, wrangling, and plotting the data,
  • grouped, wrangled, and plotted the data for the number of reported recoveries.

Interactive plots with Altair

We're now going to build some interactive data visualizations. I was recently inspired by this one in the NYTimes, a chart of the number of confirmed deaths by country for places with at least 25 deaths, similar to ours above, but with informative hover tools. This one is also interesting.

We're going to use a tool called Altair. I like Altair for several reasons, including precisely what they state on their website:

With Altair, you can spend more time understanding your data and its meaning. Altair’s API is simple, friendly and consistent and built on top of the powerful Vega-Lite visualization grammar. This elegant simplicity produces beautiful and effective visualizations with a minimal amount of code.

Before jumping into Altair, let's reshape our deaths_country_drop dataset. Notice that it's currently in wide data format, with a column for each country and a row for each "day" (where day 0 is the first day with at least 25 reported deaths). This worked with the pandas plotting API for the reasons discussed above.

In [39]:
# Look at head
deaths_country_drop.head()
Out[39]:
Country/Region  Algeria  Andorra  Argentina  Australia  Austria  Belgium  ...  Turkey    US  Ukraine  United Kingdom
0                  25.0     25.0       27.0       28.0     28.0     37.0  ...    30.0  28.0     27.0            56.0
1                  26.0      NaN       28.0       30.0     30.0     67.0  ...    37.0  36.0     32.0            56.0
2                  29.0      NaN       36.0       35.0     49.0     75.0  ...    44.0  40.0     37.0            72.0
3                  31.0      NaN       39.0       40.0     58.0     88.0  ...    59.0  47.0     38.0           138.0
4                  35.0      NaN       43.0       45.0     68.0    122.0  ...    75.0  54.0     45.0           178.0

5 rows × 60 columns

For Altair, we'll want to convert the data into long data format. Essentially, this gives us a row for each country/day pair, so our columns will be 'Day', 'Country/Region', and 'Deaths'. We do this using the dataframe method .melt() as follows:

In [40]:
# create long data for deaths
deaths_long = deaths_country_drop.reset_index().melt(id_vars='index', value_name='Deaths').rename(columns={ 'index': 'Day' })
deaths_long.head()
Out[40]:
   Day Country/Region  Deaths
0    0        Algeria    25.0
1    1        Algeria    26.0
2    2        Algeria    29.0
3    3        Algeria    31.0
4    4        Algeria    35.0

We'll see the power of having long data when using Altair. Such transformations have been performed for a long time; however, it wasn't until 2014 that Hadley Wickham formalized the language in his paper Tidy Data. Note that Wickham prefers to avoid the terms long and wide because, in his words, 'they are imprecise'. I generally agree, but for our purposes here of giving the flavour, they suffice.
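
To give the flavour of the wide-to-long round trip, here's a minimal sketch on a tiny, hypothetical dataframe:

# Wide: one column per country
wide = pd.DataFrame({'Day': [0, 1], 'Italy': [25.0, 29.0], 'Spain': [28.0, 35.0]})
# Long: one row per (Day, Country) pair
long = wide.melt(id_vars='Day', var_name='Country/Region', value_name='Deaths')
# And back again: pivot is the inverse of melt
long.pivot(index='Day', columns='Country/Region', values='Deaths')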

Now having transformed our data, let's import Altair and get a sense of its API.

In [41]:
import altair as alt

# altair plot 
alt.Chart(deaths_long).mark_line().encode(
    x='Day', 
    y='Deaths', 
    color='Country/Region'
)
Out[41]:

It is nice to be able to build such an informative and elegant chart in four lines of code (code which is itself rather elegant). And, looking at the simplicity of the code we just wrote, we can see why it was great to have long data: a column for each variable allowed us to explicitly and easily tell Altair what we wanted on each axis and what we wanted for the colour.

As the Altair documentation (which is great, by the way!) states,

The key idea is that you are declaring links between data columns and visual encoding channels, such as the x-axis, y-axis, color, etc. The rest of the plot details are handled automatically. Building on this declarative plotting idea, a surprising range of simple to sophisticated plots and visualizations can be created using a relatively concise grammar.

We can now customize the code to thicken the line width, to alter the opacity, and to make the chart larger:

In [42]:
# altair plot 
alt.Chart(deaths_long).mark_line(strokeWidth=4, opacity=0.7).encode(
    x='Day',
    y='Deaths',
    color='Country/Region'
).properties(
    width=800,
    height=650
)
Out[42]:

We can also add a log y-axis. To do this, we express the encodings using the long form, e.g. alt.X('Day', ...), which is, in the words of the Altair documentation,

useful when doing more fine-tuned adjustments to the encoding, such as binning, axis and scale properties, or more.

We'll also now add a hover tooltip so that, when we hover our cursor over any point on any of the lines, it will tell us the 'Country', the 'Day', and the number of 'Deaths'.

In [43]:
# altair plot 
alt.Chart(deaths_long).mark_line(strokeWidth=4, opacity=0.7).encode(
    x=alt.X('Day'),
    y=alt.Y('Deaths', scale=alt.Scale(type='log')),
    color='Country/Region',
    tooltip=['Country/Region', 'Day', 'Deaths']
).properties(
    width=800,
    height=650
)
Out[43]:

It's great that we could add that useful hover tooltip with one line of code, tooltip=['Country/Region', 'Day', 'Deaths'], particularly as it adds such information-rich interaction to the chart. One useful aspect of the NYTimes chart was that, when you hovered over a particular curve, it made that curve stand out against the others. We're going to do something similar here: in the resulting chart, when you click on a curve, the others turn grey.

Note: When first attempting to build this chart, I discovered here that "multiple conditional values in one encoding are not allowed by the Vega-Lite spec," which is what Altair uses. For this reason, we build the chart, then an overlay, and then combine them.

In [44]:
# Selection tool
selection = alt.selection_single(fields=['Country/Region'])
# Color change when clicked
color = alt.condition(selection,
                     alt.Color('Country/Region:N'),
                     alt.value('lightgray'))


# Base altair plot 
base = alt.Chart(deaths_long).mark_line(strokeWidth=4, opacity=0.7).encode(
    x=alt.X('Day'),
    y=alt.Y('Deaths', scale=alt.Scale(type='log')),
    color='Country/Region',
    tooltip=['Country/Region', 'Day','Deaths']
).properties(
    width=800,
    height=650
)

# Chart
chart = base.encode(
    color=alt.condition(selection, 'Country/Region:N', alt.value('lightgray'))
).add_selection(
    selection
)

# Overlay
overlay = base.encode(
    color='Country/Region',
  opacity=alt.value(0.5),
  tooltip=['Country/Region:N', 'Name:N']
).transform_filter(
  selection
)

# Sum em up!
chart + overlay
Out[44]:

It's not super easy to line up the legend with the curves on the chart so let's put the labels on the chart itself. Thanks to Jake Vanderplas for this suggestion, and for the code.

In [45]:
# drop NaNs
deaths_long = deaths_long.dropna()

# Selection tool
selection = alt.selection_single(fields=['Country/Region'])
# Color change when clicked
color = alt.condition(selection,
                    alt.Color('Country/Region:N'),
                    alt.value('lightgray'))


# Base altair plot 
base = alt.Chart(deaths_long).mark_line(strokeWidth=4, opacity=0.7).encode(
    x=alt.X('Day'),
    y=alt.Y('Deaths', scale=alt.Scale(type='log')),
    color=alt.Color('Country/Region', legend=None),
).properties(
    width=800,
    height=650
)

# Chart
chart = base.encode(
  color=alt.condition(selection, 'Country/Region:N', alt.value('lightgray'))
).add_selection(
  selection
)

# Overlay
overlay = base.encode(
  color='Country/Region',
  opacity=alt.value(0.5),
  tooltip=['Country/Region', 'Day', 'Deaths']
).transform_filter(
  selection
)

# Text labels
text = base.mark_text(
    align='left',
    dx=5,
    size=10
).encode(
    x=alt.X('Day', aggregate='max',  axis=alt.Axis(title='Day')),
    y=alt.Y('Deaths', aggregate={'argmax': 'Day'}, axis=alt.Axis(title='Reported Deaths')),
    text='Country/Region',  
).transform_filter(
    selection
)

# Sum em up!
chart + overlay + text
Out[45]:

Summary: We've

  • melted the data into long format,
  • used Altair to make interactive plots of increasing richness,
  • admired the elegance & simplicity of the Altair API and the visualizations produced.

That's all for the time being. I'd be interested to see how you all can make these charts more information-rich and comprehensible. I encourage you to raise ideas as issues on the issue tracker in this GitHub repository and then to make pull requests. A couple of ideas are

  • Adding lines to the above chart that show reference curves for deaths doubling every X days, as in the first chart here (a sketch follows below),
  • Figuring out a way to make the chart less crowded with names by perhaps only showing 10 of them.
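
As a starting point for the first idea, here is a minimal, untested sketch (the doubling periods and the strokeDash styling are illustrative choices) that builds reference curves which could be layered onto the Altair chart above:

# Reference curves starting at 25 deaths that double every td days
days = np.arange(deaths_long['Day'].max() + 1)
reference = pd.concat([
    pd.DataFrame({'Day': days,
                  'Deaths': 25 * 2 ** (days / td),
                  'Country/Region': f'doubles every {td} days'})
    for td in (2, 3, 7)
])
# These could then be layered onto the chart, e.g.:
# chart + alt.Chart(reference).mark_line(strokeDash=[5, 5]).encode(
#     x='Day', y='Deaths', detail='Country/Region')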