Getting started
The xdatasets library enables users to effortlessly access a vast collection of earth observation datasets in formats compatible with xarray.

The library adopts an opinionated approach to data querying and caters to the specific needs of certain user groups, such as hydrologists, climate scientists, and engineers. One of the functionalities of xdatasets is the ability to extract data at a specific location or within a designated region, such as a watershed or municipality, while also enabling spatial and temporal operations.

To use xdatasets, users must formulate a query. For instance, a straightforward query to extract the variables t2m (2 m temperature) and tp (total precipitation) from the era5_reanalysis_single_levels dataset at two geographical positions (Montreal and Toronto) could look as follows:
query = {
    "variables": {"era5_reanalysis_single_levels": ["t2m", "tp"]},
    "space": {
        "clip": "point",  # bbox, point or polygon
        "geometry": {
            'Montreal': (45.508888, -73.561668),
            'Toronto': (43.651070, -79.347015),
        },
    },
}
An example of a more complex query is shown below.

Note

Don’t worry! Below, you’ll find additional examples that will assist in understanding each parameter in the query, as well as the possible combinations.

This query requests the same variables as above. However, instead of specifying geographical positions, a geopandas.GeoDataFrame is used to provide features (such as shapefiles or GeoJSON) within which data is extracted. Each polygon is identified by the unique identifier Station, and a spatial average is computed within each one (aggregation: True). The dataset, initially at an hourly time step, is converted to a daily time step while applying one or more temporal aggregations to each variable, as prescribed in the query. xdatasets ultimately returns the dataset for the specified date range and time zone.
query = {
    "variables": {"era5_reanalysis_single_levels": ["t2m", "tp"]},
    "space": {
        "clip": "polygon",  # bbox, point or polygon
        "aggregation": True,  # spatial average of the variables within each polygon
        "geometry": gdf,
        "unique_id": "Station",  # unique column name in geodataframe
    },
    "time": {
        "timestep": "D",
        "aggregation": {"tp": np.nansum,
                        "t2m": [np.nanmax, np.nanmin]},
        "start": '2000-01-01',
        "end": '2020-05-31',
        "timezone": 'America/Montreal',
    },
}
Query climate datasets
In order to use xdatasets, you must import at least xdatasets, pandas, geopandas, and numpy.
[1]:
%load_ext autoreload
%autoreload 2
[2]:
import xdatasets as xd
import geopandas as gpd
import pandas as pd
import numpy as np
Clip by points (sites)
To begin with, we need to create a dictionary of sites and their corresponding geographical coordinates.
[3]:
sites = {
    'Montreal': (45.508888, -73.561668),
    'New York': (40.730610, -73.935242),
    'Miami': (25.761681, -80.191788),
}
We will then extract the t2m (2 m temperature) variable from the era5_reanalysis_single_levels dataset for the designated sites. Afterward, we will convert the time step to daily and adjust the time zone to Eastern Time. Finally, we will limit the temporal interval to the timeframe between 2018 and 2020.
Before proceeding with this first query, let’s quickly outline the role of each parameter:
variables: A dictionary where datasets serve as keys and desired variables as values.
space: A dictionary that defines the necessary spatial operations to apply on user-supplied geographic features.
time: A dictionary that defines the necessary temporal operations to apply on the datasets.
For more information on each parameter, consult the API documentation.
This is what the requested query looks like:
[4]:
query = {
    "variables": {"era5_reanalysis_single_levels": ["t2m"]},
    "space": {
        "clip": "point",  # bbox, point or polygon
        "geometry": sites,
    },
    "time": {
        "timestep": "D",  # http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases
        "aggregation": {"t2m": np.nanmean},
        "start": '2018-01-01',
        "end": '2020-12-31',
        "timezone": 'America/Montreal',
    },
}

xds = xd.Dataset(**query)
Processing t2m: 100%|██████████| 1/1 [00:00<00:00, 4.69it/s]
By accessing the data attribute, you can view the data obtained from the query. It’s worth noting that the variable name t2m has been updated to t2m_nanmean to reflect the reduction operation (np.nanmean) used to convert the time step from hourly to daily.
[5]:
xds.data
[5]:
<xarray.Dataset>
Dimensions:      (site: 3, time: 1096)
Coordinates:
    lat          (site) float32 45.5 40.75 25.75
    lon          (site) float32 -73.5 -74.0 -80.25
  * site         (site) <U8 'Montreal' 'New York' 'Miami'
  * time         (time) datetime64[ns] 2018-01-01 2018-01-02 ... 2020-12-31
Data variables:
    t2m_nanmean  (time, site) float32 250.0 262.4 294.2 ... 274.0 279.8 297.4
Attributes:
    institution:  ECMWF
    source:       Reanalysis
    title:        ERA5 forecasts
    timezone:     America/Montreal
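To illustrate what this reduction does, here is a minimal, self-contained sketch of the equivalent operation in plain xarray on a synthetic hourly series. The variable names are illustrative, and this is only a conceptual sketch, not xdatasets’ actual implementation:

import numpy as np
import pandas as pd
import xarray as xr

# Synthetic hourly 2m-temperature series standing in for one ERA5 variable.
time = pd.date_range("2018-01-01", periods=48, freq="h")
hourly = xr.Dataset(
    {"t2m": ("time", 250.0 + np.random.rand(48))},
    coords={"time": time},
)

# Roughly what {"timestep": "D", "aggregation": {"t2m": np.nanmean}} requests:
# resample the hourly values into daily bins, reduce each bin with np.nanmean,
# and rename the variable to record the reduction that was applied.
daily = hourly.resample(time="1D").reduce(np.nanmean)
daily = daily.rename({"t2m": "t2m_nanmean"})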
[6]:
xds.data.hvplot(
    title='Comparison of the 2-meter temperature (K) across three cities in North America from 2018 to 2020',
    x="time",
    y="t2m_nanmean",
    grid=True,
    width=750,
    height=450,
    by="site",
    legend="top",
    widget_location="bottom",
)
[6]:
Clip on polygons with no averaging in space
Let’s first access certain polygon features, which can be in the form of shapefiles, GeoJSON, or any other format compatible with geopandas. In this example, we are using GeoJSON files.
[7]:
# The polygon files are fetched over HTTPS, so the URLs are built as plain
# strings (pathlib.Path would collapse the double slash in "https://").
bucket = "https://s3.us-east-2.wasabisys.com/watersheds-polygons/MELCC/json"
paths = [
    f"{bucket}/023003/023003.json",
    f"{bucket}/031101/031101.json",
    f"{bucket}/040111/040111.json",
]
Subsequently, all of the files can be opened and consolidated into a single geopandas.GeoDataFrame object.
[8]:
gdf = pd.concat([gpd.read_file(path) for path in paths]).reset_index(drop=True)
gdf
[8]:
  Station         Superficie                                           geometry
0  023003  208.4591919813271  POLYGON ((-70.82601 46.81658, -70.82728 46.815...
1  031101  111.7131058782722  POLYGON ((-73.98519 45.21072, -73.98795 45.209...
2  040111   433.440893903503  POLYGON ((-74.06645 46.02253, -74.06647 46.022...
Let’s examine the geographic locations of the polygon features.
[9]:
gdf.hvplot(
    geo=True,
    tiles='ESRI',
    color='Station',
    alpha=0.8,
    width=750,
    height=450,
    legend='top',
    hover_cols=['Station', 'Superficie'],
)
[9]:
The following query requests the variables t2m and tp from the era5_reanalysis_single_levels dataset, covering the period between January 1, 1959, and July 31, 1962, for the three polygons mentioned earlier. It is important to note that since aggregation is set to False, no spatial averaging will be conducted; instead, a mask (raster) will be returned for each polygon.
[10]:
query = {
    "variables": {"era5_reanalysis_single_levels": ["t2m", "tp"]},
    "space": {
        "clip": "polygon",  # bbox, point or polygon
        "aggregation": False,  # no spatial averaging; return a mask for each polygon
        "geometry": gdf,
        "unique_id": "Station",  # unique column name in geodataframe
    },
    "time": {
        "start": '1959-01-01',
        "end": '1962-07-31',
    },
}

xds = xd.Dataset(**query)
3it [00:05, 1.90s/it]
By accessing the data attribute, you can view the data obtained from the query. For each variable, the dimensions time, latitude, longitude, and Station (the unique ID) are included. In addition, another variable called weights is returned. It specifies the weight that should be assigned to each pixel if spatial averaging is conducted over a mask (polygon).
[11]:
xds.data
[11]:
<xarray.Dataset>
Dimensions:     (latitude: 6, longitude: 5, time: 31392, Station: 3)
Coordinates:
  * latitude    (latitude) float64 45.0 45.25 46.0 46.25 46.5 46.75
  * longitude   (longitude) float64 -74.5 -74.25 -74.0 -71.0 -70.75
  * time        (time) datetime64[ns] 1959-01-01 ... 1962-07-31T23:00:00
  * Station     (Station) object '023003' '031101' '040111'
    Superficie  (Station) object '208.4591919813271' ... '433.440893903503'
Data variables:
    t2m         (Station, time, latitude, longitude) float32 nan nan ... nan nan
    tp          (Station, time, latitude, longitude) float32 nan nan ... nan nan
    weights     (Station, latitude, longitude) float64 nan nan nan ... nan nan
Attributes:
    Conventions:               CF-1.6
    history:                   2022-11-10 02:03:41 GMT by grib_to_netcdf-2.25...
    pangeo-forge:inputs_hash:  48ccebfa8d853ed75a452921e072836a0c4203ef57e5e5...
    pangeo-forge:recipe_hash:  cf36e651f60c5640ec6131d48e27b49c1d5cfa6bdc93da...
    pangeo-forge:version:      0.9.4
Weights are much easier to comprehend visually, so let’s examine the weights returned for the station 023003. Notice that when selecting a single feature (Station 023003 in this case), the shape of our spatial dimensions is reduced to a 2x2 pixel area (longitude x latitude) that encompasses the entire feature.
[12]:
station = '023003'
ds_station = xds.data.sel(Station=station)
ds_clipped = xds.bbox_clip(ds_station)
ds_clipped
[12]:
<xarray.Dataset>
Dimensions:     (time: 31392, latitude: 2, longitude: 2)
Coordinates:
  * latitude    (latitude) float64 46.5 46.75
  * longitude   (longitude) float64 -71.0 -70.75
  * time        (time) datetime64[ns] 1959-01-01 ... 1962-07-31T23:00:00
    Station     <U6 '023003'
    Superficie  object '208.4591919813271'
Data variables:
    t2m         (time, latitude, longitude) float32 254.7 254.8 ... 291.7 291.2
    tp          (time, latitude, longitude) float32 nan nan ... 0.002735
    weights     (latitude, longitude) float64 7.449e-06 0.0001531 0.9079 0.09194
Attributes:
    Conventions:               CF-1.6
    history:                   2022-11-10 02:03:41 GMT by grib_to_netcdf-2.25...
    pangeo-forge:inputs_hash:  48ccebfa8d853ed75a452921e072836a0c4203ef57e5e5...
    pangeo-forge:recipe_hash:  cf36e651f60c5640ec6131d48e27b49c1d5cfa6bdc93da...
    pangeo-forge:version:      0.9.4
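As a quick sanity check (an expectation based on the values shown above, not a documented guarantee), the weights within a polygon should sum to approximately 1:

# 7.449e-06 + 0.0001531 + 0.9079 + 0.09194 is approximately 1.0, so a
# weighted sum of pixel values directly yields the spatial average.
print(ds_clipped.weights.sum().item())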
[13]:
(
    (
        ds_clipped.t2m.isel(time=0).hvplot(
            title="The 2m temperature for pixels that intersect with the polygon on January 1, 1959",
            tiles="ESRI",
            geo=True,
            alpha=0.6,
            colormap="isolum",
            width=750,
            height=450,
        )
        * gdf[gdf.Station == station].hvplot(
            geo=True,
            width=750,
            height=450,
            legend="top",
            hover_cols=["Station", "Superficie"],
        )
    )
    + ds_clipped.weights.hvplot(
        title="The weights that should be assigned to each pixel when performing spatial averaging",
        tiles="ESRI",
        alpha=0.6,
        colormap="isolum",
        geo=True,
        width=750,
        height=450,
    )
    * gdf[gdf.Station == station].hvplot(
        geo=True,
        width=750,
        height=450,
        legend="top",
        hover_cols=["Station", "Superficie"],
    )
).cols(1)
[13]:
The two plots depicted above show the 2m temperature for each pixel that intersects with the polygon from Station 023003, along with the corresponding weight applied to each pixel. In the lower plot, it is apparent that the majority of the polygon is situated in the upper-left pixel, which gives that pixel a weight of approximately 91%. The two lower pixels intersect the polygon only minimally, so their respective weights are nearly zero (hover over the plot to verify the weights).

In various libraries, either all pixels that intersect with the geometries are kept, or only pixels whose centers fall within the polygon are retained. However, as the previous example shows, such methods can introduce significant biases in the final outcome: keeping all four intersecting pixels with equal weights, when the temperature values in the lower pixels are roughly 2 degrees lower than that of the upper-left pixel, would noticeably skew the result. Using weights is therefore a more precise approach.
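To make the role of the weights concrete, here is a minimal sketch of how a weighted spatial average could be computed manually with xarray from the ds_clipped dataset above. It mirrors what aggregation: True does conceptually; the library’s actual implementation may differ:

# Replace NaN weights with 0 so they do not propagate, then take the
# weighted mean over the spatial dimensions. Because the weights sum to
# approximately 1, the weighted sum and the weighted mean coincide here.
weights = ds_clipped.weights.fillna(0)
t2m_series = ds_clipped.t2m.weighted(weights).mean(dim=["latitude", "longitude"])

# Equivalent explicit form: multiply each pixel by its weight and sum spatially.
t2m_series_manual = (ds_clipped.t2m * weights).sum(dim=["latitude", "longitude"])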
Clip on polygons with averaging in space
The following query requests the variables t2m and tp from the era5_reanalysis_single_levels dataset, covering the period between January 1, 1979, and December 31, 2020, for the three polygons mentioned earlier. Note that when the aggregation parameter is set to True, spatial averaging takes place: the weighted mask (raster) described earlier is applied to generate a time series for each polygon.

Additional steps are carried out in the process, including converting the original hourly time step to a daily time step. During this conversion, various temporal aggregations are applied to each variable, and a conversion to the local time zone takes place.
[14]:
query = {
    "variables": {"era5_reanalysis_single_levels": ["t2m", "tp"]},
    "space": {
        "clip": "polygon",  # bbox, point or polygon
        "aggregation": True,
        "geometry": gdf,
        "unique_id": "Station",
    },
    "time": {
        "timestep": "D",  # http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases
        "aggregation": {"tp": [np.nansum],
                        "t2m": [np.nanmax, np.nanmin]},
        "start": '1978-12-31',
        "end": '2020-12-31',
        "timezone": 'America/Montreal',
    },
}

xds = xd.Dataset(**query)
3it [00:02, 1.18it/s]
Processing tp: 100%|██████████| 2/2 [00:07<00:00, 3.57s/it]
The resulting dataset can be explored via the data attribute:
[15]:
xds.data
[15]:
<xarray.Dataset>
Dimensions:     (Station: 3, time: 15342)
Coordinates:
    longitude   (Station) float64 -70.94 -74.14 -74.27
    latitude    (Station) float64 46.72 45.18 46.08
  * Station     (Station) object '023003' '031101' '040111'
    Superficie  (Station) object '208.4591919813271' ... '433.440893903503'
  * time        (time) datetime64[ns] 1978-12-31 1979-01-01 ... 2020-12-31
Data variables:
    t2m_nanmax  (time, Station) float32 271.1 276.4 271.5 ... 273.7 276.1 272.7
    t2m_nanmin  (time, Station) float32 268.8 274.4 269.7 ... 269.0 271.3 266.3
    tp_nansum   (time, Station) float32 0.0 0.0 0.0 ... 0.002386 0.002362
Attributes:
    institution:  ECMWF
    source:       Reanalysis
    title:        ERA5 forecasts
    timezone:     America/Montreal
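Since ERA5 reports total precipitation in metres, a common follow-up step is to convert the daily totals to millimetres before analysis or plotting. The derived variable name tp_nansum_mm below is chosen purely for illustration:

# Convert daily total precipitation from metres (the ERA5 convention) to mm.
xds.data["tp_nansum_mm"] = xds.data["tp_nansum"] * 1000.0
xds.data["tp_nansum_mm"].attrs["units"] = "mm"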
The next cell displays the daily 2m maximum and minimum temperature along with total precipitation for each polygon (station). Please note that for full interactivity, running the code in a Jupyter Notebook is necessary.
[16]:
(
    xds.data.to_dataframe().hvplot(
        x="time",
        y=["t2m_nanmax", "t2m_nanmin"],
        grid=True,
        width=750,
        height=450,
        groupby="Station",
        shared_axes=True,
        legend="top",
        widget_location="bottom",
    )
    + xds.data.to_dataframe().hvplot(
        x="time",
        y=["tp_nansum"],
        grid=True,
        width=750,
        height=450,
        groupby="Station",
        legend="top",
        widget_location="bottom",
    )
)
[16]: