Taylor Diagrams and high-resolution plots of land use and elevation.

This is an overview of how to produce the following plots.

Taylor Diagram

The Taylor diagram graphically illustrates how well a model represents the variability of the observations it is attempting to predict, summarising correlation, standard deviation and (centred) RMSE in a single plot.
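The three statistics on the diagram are tied together by a law-of-cosines identity: the centred RMSE E', the model and observation standard deviations, and the correlation R satisfy E'^2 = σm^2 + σo^2 − 2σmσoR. A quick numpy sketch (variable names and the synthetic data are my own, purely for illustration) verifies this from sample data:

```python
import numpy as np

rng = np.random.default_rng(42)
obs = rng.normal(size=1000)
model = 0.8 * obs + 0.3 * rng.normal(size=1000)  # synthetic "forecast"

R = np.corrcoef(model, obs)[0, 1]
sig_m, sig_o = model.std(), obs.std()

# Centred RMSE: the bias is removed before computing the RMSE
crmse = np.sqrt(np.mean(((model - model.mean()) - (obs - obs.mean())) ** 2))

# Law-of-cosines identity underlying the Taylor diagram geometry
print(np.isclose(crmse ** 2, sig_m ** 2 + sig_o ** 2 - 2 * sig_m * sig_o * R))
```

This identity is what lets a single 2D plot show all three statistics at once: the model point sits at polar radius σm and angle arccos(R), and its distance from the observation point on the x-axis is then exactly E'.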

This code was adapted from the python file in this GitHub repository to suit our needs; credit goes to the original author.

This involves writing two Python functions: one for generating circles on a graph, and one that pulls all the data together, draws the appropriate circles and produces the plot. These are shown in the code blocks below.

import numpy as np
import matplotlib.pyplot as plt

def drawCircle(radius, xcenter=0, ycenter=0, maxradius=np.inf,
               style="--", color="k", lw=1, label=""):
    angles = np.linspace(-np.pi / 2, np.pi / 2, 361)
    x = np.sin(angles) * radius + xcenter
    y = np.cos(angles) * radius + ycenter

    # Only keep points within the outer circle
    I = np.where(x ** 2 + y ** 2 < maxradius ** 2)[0]
    if len(I) == 0:
        return
    x = x[I]
    y = y[I]
    plt.plot(x, y, style, color=color, lw=lw, zorder=-100, label=label)
    plt.plot(x, -y, style, color=color, lw=lw, zorder=-100)
def makeTaylor(corrs, stds, stdobs, circ_LIM=1.0, loc='location'):
    '''This function takes a list of correlations and standard deviations
       corresponding to 6 different model outputs in this case (alter
       the code for your application), along with the standard deviation
       of the observations. circ_LIM determines the radius of the outer
       circle in the plot and loc is a name identifier (like station name)
       for plot titles and filenames for saving plots.'''
    corrs = np.asarray(corrs)
    stds = np.asarray(stds)
    plot_colors = ['blue', 'purple', 'brown', 'orange', 'green', 'red']
    ang = np.arccos(corrs)
    x = stds * np.cos(ang)
    y = stds * np.sin(ang)
    fig, axs = plt.subplots(1, 1, figsize=(8, 8))

    # First three models as circles, last three as crosses
    plt.scatter(x[:3], y[:3], marker='o', s=120, color=plot_colors[:3], edgecolor='k')
    plt.scatter(x[3:], y[3:], marker='X', s=180, color=plot_colors[3:], edgecolor='k')

    xticks = np.arange(0, circ_LIM, 0.1)

    # Dashed arc marking the observed standard deviation
    drawCircle(stdobs, xcenter=0, ycenter=0, style="--", color="cyan", lw=3)

    plt.xticks(xticks[xticks >= 0])
    plt.text(np.sin(np.pi / 4) * circ_LIM, np.cos(np.pi / 4) * circ_LIM,
             "Correlation", rotation=-45, horizontalalignment="center",
             verticalalignment="bottom")

    plt.plot(stdobs, 0, 's-', color='cyan', label="Observation", mew=2, clip_on=False)

    # Radial lines and labels for the correlation axis
    corr_ticks = [0, 0.5, 0.8, 0.9, 0.95, 0.99]
    for i in range(0, len(corr_ticks)):
        angl = np.arccos(corr_ticks[i])
        xl = np.cos(angl) * circ_LIM
        yl = np.sin(angl) * circ_LIM
        plt.plot([0, xl], [0, yl], 'k--')
        plt.text(xl, yl, str(corr_ticks[i]), verticalalignment="bottom")

    # Grey arcs of constant centred RMSE, centred on the observation point
    Rs = np.linspace(0.1, 1.0, 10)
    for R in Rs:
        drawCircle(R, xcenter=stdobs, ycenter=0, maxradius=circ_LIM,
                   style="-", color="gray", lw=3)
        xc = np.sin(-np.pi / 4) * R + stdobs
        yc = np.cos(np.pi / 4) * R
        if xc > 0:
            plt.text(xc, yc, str(R)[:3], horizontalalignment="right",
                     verticalalignment="top")

    # Dotted arcs of constant standard deviation, plus the outer boundary
    for X in plt.xticks()[0]:
        if X <= circ_LIM:
            drawCircle(X, style=":")
    drawCircle(circ_LIM, style="-", lw=3)

    plt.savefig('plots/' + loc.replace(' ', '_') + '_TaylorWind', dpi=300)

This will produce a plot like this:

Corine Land use data

The Corine land use dataset can be downloaded from the EPA website. The data come in the form of a .shp file, which I had not encountered before. A quick Google search suggested that the geopandas library would allow me to analyse the data. The dataset comprises 18,882 rows × 8 columns. The important column is “Class_Desc”, which gives the names of the land use classifiers.

import geopandas as gpd
import matplotlib.pyplot as plt

cr = gpd.read_file('CLC18_IE_ITM/CLC18_IE_ITM.shp')

# Colour each polygon by its land use class
cr.plot(column='Class_Desc', figsize=(10, 10))
plt.show()
This produced a map of the Republic of Ireland (ROI) shown below.

The colouring represents each of the 35 land use categories in the dataset.
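Since a GeoDataFrame behaves like an ordinary pandas DataFrame, the category count can be confirmed with `cr['Class_Desc'].value_counts()` on the real data. The toy frame below (the class names are illustrative, not the actual 35 Corine classes) shows the pattern:

```python
import pandas as pd

# Toy stand-in for the shapefile's attribute table; on the real data this
# would be cr['Class_Desc'].value_counts()
df = pd.DataFrame({'Class_Desc': ['Pastures', 'Peat bogs', 'Pastures',
                                  'Coniferous forest', 'Peat bogs']})
counts = df['Class_Desc'].value_counts()
print(len(counts))  # number of distinct land use classes in the toy frame
```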

Looking into the data further, it becomes apparent that the data are mapped in the Irish Transverse Mercator projection on an easting and northing grid. It is possible to zoom in on certain areas by indexing the data appropriately, and you can convert lat-lon coordinates to easting and northing using this handy tool from OSI. An example below zooms in on Kerry and drops pins (black crosses) at two locations, Lerrig and Dromnacurra, in the north of the county.
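As an alternative to the OSI web tool, the pyproj library (if installed) can do the same conversion programmatically; EPSG:2157 is the code for Irish Transverse Mercator and EPSG:4326 is WGS84 lat-lon. The coordinates below are rough values for north Kerry, not the exact pin locations:

```python
from pyproj import Transformer

# WGS84 lon/lat -> Irish Transverse Mercator easting/northing
to_itm = Transformer.from_crs("EPSG:4326", "EPSG:2157", always_xy=True)

# Approximate point in north Kerry (illustrative, not Lerrig/Dromnacurra)
easting, northing = to_itm.transform(-9.8, 52.4)
print(round(easting), round(northing))
```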

import geopandas as gpd
import matplotlib.pyplot as plt

cr = gpd.read_file('CLC18_IE_ITM/CLC18_IE_ITM.shp')

fig, ax = plt.subplots(1, 1, figsize=(10, 10))
cr.plot(ax=ax, column='Class_Desc')

# Markers, with easting/northing values converted from the OSI tool
# (placeholder coordinates shown -- substitute your own converted values)
ax.scatter([480000, 485000], [620000, 618000], marker='x', s=100, color='k')

# Arbitrary plot limits (placeholders) to zoom in on north Kerry
ax.set_xlim(440000, 520000)
ax.set_ylim(580000, 660000)

Digital Elevation Map (DEM)

One option for downloading high resolution elevation data is outlined in this handy tutorial. You will need to install the following libraries.

$ conda create -n gdal_test
$ conda activate gdal_test
$ conda install gdal
$ conda install elevation

These allow you to both download and plot DEM data. The following should be entered at the command line to download your data (obviously change the bounding box to suit your application). I have chosen to focus on Kerry (again).

$ eio clip -o Kerry-30m-DEM.tif --bounds -10.6 51. -8.9 53. 

This will download the .tif that you named in your command above. Now you can use GDAL within Python to visualise the data.

from osgeo import gdal
import numpy as np
import matplotlib 
import matplotlib.pyplot as plt

filename = "Kerry-30m-DEM.tif"
gdal_data = gdal.Open(filename)
gdal_band = gdal_data.GetRasterBand(1)
nodataval = gdal_band.GetNoDataValue()

# convert to a numpy array (np.float is deprecated; use the builtin float)
data_array = gdal_data.ReadAsArray().astype(float)

# replace missing values if necessary (GetNoDataValue may return None)
if nodataval is not None and np.any(data_array == nodataval):
    data_array[data_array == nodataval] = np.nan

#Plot out data with Matplotlib's 'contour'
fig = plt.figure(figsize = (12, 8))
ax = fig.add_subplot(111)

## I flipped the rows with [::-1] because the data was initially
## plotting upside down (GeoTIFF row 0 is the northern edge). This fixed it.
plt.contourf(data_array[::-1, :], cmap="viridis",
             levels=list(range(0, 1050, 50)))

plt.title("Elevation Contours over Kerry")
cbar = plt.colorbar()

Kerry does not have the same terrain complexity as Mt Shasta in the tutorial, so remember to adjust your contouring levels accordingly.

These data are mapped on a “World Geodetic System 1984” projection.
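Because the grid is regular in lon-lat (WGS84), the contour axes can be given real coordinates using the affine geotransform that GDAL stores with the raster; `gdal_data.GetGeoTransform()` returns a tuple `(x0, dx, rot, y0, rot, dy)`. Below is a sketch of the arithmetic, using a made-up geotransform and raster size consistent with the Kerry bounding box (30 m is roughly 1 arcsecond, i.e. 1/3600 of a degree):

```python
def raster_extent(gt, ncols, nrows):
    """Compute [xmin, xmax, ymin, ymax] for matplotlib from a GDAL
    geotransform (x0, dx, rot, y0, rot, dy). dy is negative for
    north-up rasters, so y0 is the top edge."""
    x0, dx, _, y0, _, dy = gt
    return [x0, x0 + ncols * dx, y0 + nrows * dy, y0]

# Hypothetical geotransform/size for the Kerry clip (illustrative values)
gt = (-10.6, 1 / 3600, 0.0, 53.0, 0.0, -1 / 3600)
extent = raster_extent(gt, ncols=6120, nrows=7200)
print(extent)
```

The resulting list can be passed as `extent=extent` to `plt.contourf` or `plt.imshow` so the axes show degrees rather than pixel indices.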


June 2019 Conferences: WESC, ICEM

I attended two conferences in June 2019; WESC in UCC, Cork and ICEM in DTU, Copenhagen. Here are some points from these conferences.


I presented a talk titled “WRF planetary boundary layer schemes for wind forecasting” in the “Mesoscale” session as part of Theme 1: “Wind resource, turbulence and wakes” at WESC in UCC. The other talks in the session included the following.

  • Dr. Martin Doerenkaemper presented work involved in the production of the New European Wind Atlas (NEWA). He outlined computing hardware challenges of the project, WRF setup testing and workflow planning.
  • Dr. Dries Allaerts presented on assimilation methods involving mesoscale-to-microscale coupling.
  • Prof. S.C. Pryor presented an analysis of 2 different wind farm parameterizations, Fitch and EWP, at varying model resolutions, run for an area over Iowa, USA. It was found that EWP produced 1.5-2.1% higher capacity factors than Fitch, which could impact wind farm planning. Further investigation is needed to determine which scheme is closer to reality.
  • Erkan Yilmaz presented work looking at WRF skill at forecasting wind in complex terrain in Central Turkey, analysing the pros and cons of nearest-grid-point selection versus bi-linear interpolation for better forecast results.


I presented a talk titled “An investigation of WRF PBL schemes for renewable energy forecasting in Ireland” in the “Forecasting for power system applications: wind models” session at ICEM2019 in DTU.

  • Sue Ellen Haupt presented work on mesoscale-to-microscale coupling at NCAR for wind energy applications. She discussed the issue of the “terra incognita”, the grey zone between mesoscale NWP at O(1 km) and LES at O(100 m), and noted that the upper limit of the terra incognita is the boundary layer height.
  • James Wilczak presented work from WFIP2, discussing the observations collected during the field campaign and the WRF model developments these led to, e.g. examining cold pools that cause sustained weak winds east of the mountains in Washington state. Their updates to the MYNN scheme are available in a WRF repository on GitHub.
  • Sven-Erik Gryning presented work using WRF to downscale GFS forecast data to predict wind ramping at different forecast horizons at the FINO3 platform. Forecasts were evaluated using the correlation coefficient and a comparison of histograms. At 10 min lead time WRF performed poorly; it becomes skilful at around 4 hours lead time before skill reduces again at longer lead times. Results were shown for both wind speed and wind direction.
  • Jared Lee presented work on a project for renewable energy forecasting in Kuwait using WRF-Solar-Wind and DiCast (a multi-model forecast system using MOS to determine adaptive weighting of different models using a 90 day training period). Also using a “cubist” machine learning method.

Journal Club: 11-06-2019

This paper studied a case of extreme ramping that occurred in the central USA in July 2011. The passage of a cold front, along with large instability, triggered the formation of a mesoscale convective system. This resulted in strong gusts from downdrafts, which caused extensive damage to the Buffalo Ridge wind farm.

Wind farm damage

This paper evaluated the performance of different WRF PBL and microphysics schemes at representing this event. ERA-Interim reanalysis data were downscaled for this event using WRF with 3 domains at 9 km, 3 km and 1 km. 3 PBL schemes (YSU, MYJ and Shin-Hong) and 2 microphysics schemes (Ferrier-Aligo and WDM6) were tested. Ferrier-Aligo is a simpler scheme with fewer classes of hydrometeor types than WDM6. The WRF output was analysed for the two innermost domains (3 km and 1 km).

Max wind speed from observation stations in “analysis region” (top) and the corresponding maximum WRF wind speed in the “analysis region”.

It was found that the highest-resolution domain with the most complex microphysics scheme (WDM6) produced the strongest wind speeds, although the observation stations did not pick up these strong winds (top plot), apart from the black dots, which lie outside the “analysis region” to the east. The high-resolution domain (d03) with Ferrier-Aligo (FA) microphysics performed similarly to the coarser resolution (d02) with WDM6 (W6) for 10 m winds, but at the extremes of the profile W6-d02 produces stronger winds aloft within the turbine layer than FA-d03.

The paper provides some possible methods for analysing high-resolution case studies.


EGU 2019

Here are some of the interesting points I saw during EGU 2019 in Vienna.

Monday: NWP, DA and Ensembles (AS1.1)

  • Florian Pappenberger outlined plans for future work at ECMWF. Cycle 46R1 will be introduced in June 2019 and will include the ENS being initialised from a 50-member ensemble DA system. There will be a web presentation on 46R1 on the 15th and 16th of May.
  • Pall August Porarinsson presented work on the use of 750 m resolution runs of HARMONIE for forecasting over Iceland. Compared to the operational 2.5 km resolution model, the 750 m runs offered better forecasts in areas of complex topography (such as fjords) but produced a worse temperature forecast.

Atmospheric Boundary Layer (AS2.2)

  • Domingo Munoz-Esparza presented work using LES simulations to forecast turbulence for UAVs (i.e. drones). In an effort to make these simulations fast enough to run operationally, they have developed a GPU-based LES code called FastEddy.
  • Clara Garcia-Sanchez presented work using WRF with the Fitch turbine wake scheme to estimate losses from hypothetical wind farms in the central USA. WRF was run at 3 km as a resource assessment tool. It was found that as wind farms get larger and larger (beyond realistic scales), the energy density asymptotically reduces towards 1 W/m^2.

Convection Permitting modelling (CL5.04/AS1.28)

  • Daniel Argüeso presented work using WRF simulations to forecast rainfall over the Maritime Continent. He ran WRF at a variety of resolutions (32, 16, 8, 4 and 2 km) with and without a cu_physics scheme. It was found that explicitly resolving convection produced stronger rainfall events and a more accurate diurnal cycle, as the convection scheme did not allow CAPE to build to large enough values.

Energy Meteorology (ERE2.1/AS1.11)

  • Frederik Kurzrock discussed using WRF to forecast incoming shortwave radiation over La Réunion. Two WRF experiments were conducted: one with an updating cycle and WRF-DA assimilation of satellite observations, and another simply downscaling GFS driving data. He used spatially averaged SW to compare forecast values to observations, citing Lara-Fanego. It was found that the setup with data assimilation did not improve overall skill, although it did reduce the occurrence of large errors. More frequent cycling might be tested to see if the DA setup can be improved.
  • Adrian Acevedo presented work on the use of WRF downscaling and PCA for forecasting renewables at a coastal location in northern Spain. He downscaled both CFS (reanalysis) and GFS (forecast) using WRF at 9 km – 3 km – 600 m, compared the WRF forecast to observations, and also compared the WRF forecast to the WRF reanalysis, treating the WRF reanalysis as truth. He also did some post-processing with PCA, but I didn’t follow it.
  • Jan Wohland presented work on the use of century-long reanalyses for wind resource assessment, looking at 20CR (NOAA), ERA-20C and CERA-20C (both ECMWF). It was found that the ECMWF reanalyses have spurious trends caused by the assimilation of ocean winds, which can be corrupted by changes to the observing system. When the trend is removed, it reveals significant low-frequency (multi-decadal) variability in wind speed, and shows that the period since the late seventies (the satellite era and the starting point for most global reanalyses) has had faster-than-average wind speeds. Therefore the use of this 30-40 year period for resource assessment might not be as representative as we think. Not relevant to my work at all, but it was a very interesting talk.

I also presented a poster on the influence of PBL schemes in the WRF model on wind speed forecast skill over Ireland, looking at both 10 m winds from synop stations and mast observations from wind farms.