The classification itself is performed in the Google Earth Engine (EE; Gorelick et al., 2017) platform, as it hosts the 34 pre-processed earth observation (satellite) input features. These features - combined with your training areas - are fed into a random forest classifier to map your area of interest into Local Climate Zones. The LCZ map sent via email is obtained using all training data and is Gaussian-filtered, as done in Demuzere et al. (2020).
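The actual pipeline runs in Earth Engine; purely as an illustration of the underlying technique, here is a minimal scikit-learn sketch of a random forest mapping pixel feature vectors to LCZ labels. All data, sizes, and labels below are synthetic stand-ins, not the Generator's actual features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical stand-in for the 34 pre-processed earth observation
# input features: one row of features per labelled training pixel.
n_pixels, n_features = 500, 34
X_train = rng.normal(size=(n_pixels, n_features))
y_train = rng.integers(1, 18, size=n_pixels)  # LCZ classes 1-17

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Classify every pixel of the region of interest the same way.
X_roi = rng.normal(size=(1000, n_features))
lcz_map = clf.predict(X_roi)  # one LCZ label per ROI pixel
```

In Earth Engine the equivalent step uses its built-in random forest classifier on the hosted feature images, so this local sketch only mirrors the logic, not the infrastructure.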
For the quality assessment - in line with Bechtel et al. (2019) - the classification procedure is repeated 25 times, each time randomly sampling (stratified) 70% / 30% of the training samples for training / testing, respectively. For more information, also see: How is the accuracy of my LCZ map calculated?
The LCZ submission page expects a training area .kml/.kmz file that contains folders. Please use this training area template file.
However, there are other possible reasons for your submission to fail:
Apparently your LCZ submission failed. Yet there is no reason to panic: this error email is also sent to the LCZ Generator developers, who will try to fix the problem and reach out to you soon after.
According to Stewart and Oke (2012), Local Climate Zones are formally defined as "regions of uniform surface cover, structure, material, and human activity that span hundreds of meters to several kilometers in horizontal scale". This is sometimes referred to as a local (or neighbourhood) scale and lies between the climatological micro- and meso-scales (Oke et al., 2017).
This concept allows a certain range of appropriate scales, which implies different "valid" LCZ maps depending on the resolution. As there may not be one "optimal" scale for the LCZ classification, an approximate range can be specified: a resolution of 10–30 m is too high; equally, a resolution of 500–1000 m is too low. Bechtel et al. (2015) therefore found a resolution of 100–150 m to be a good compromise. In addition, a Gaussian post-classification filter is applied following Demuzere et al. (2020), reducing the thematic resolution of the LCZ zones, as single pixels do not constitute an LCZ.
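One common way to realize such a categorical post-classification smoothing - a sketch of the general technique, not necessarily the Generator's exact implementation - is to Gaussian-blur a per-class indicator layer for each LCZ class and keep the dominant class per pixel, which removes single-pixel "zones":

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_majority_filter(lcz, classes, sigma=1.5):
    """Smooth a categorical LCZ raster: blur per-class indicator
    layers with a Gaussian kernel and keep the dominant class."""
    stack = np.stack([gaussian_filter((lcz == c).astype(float), sigma)
                      for c in classes])
    return np.asarray(classes)[stack.argmax(axis=0)]

# Toy two-class map with a single isolated pixel of class 2.
lcz = np.ones((9, 9), dtype=int)
lcz[4, 4] = 2
smoothed = gaussian_majority_filter(lcz, classes=[1, 2], sigma=1.5)
# The isolated single-pixel zone is absorbed by its neighbourhood.
```

The filter width (sigma) controls how aggressively small patches are merged into their surroundings, i.e. the effective thematic resolution of the final map.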
The quality of the LCZ map is obtained via an automated cross-validation approach that uses 25 bootstraps (Bechtel et al., 2019). For each bootstrap, 70% of the Training Area (TA) polygons are used to train and 30% to test; the polygons are selected by stratified (LCZ type) random sampling, maintaining the original LCZ class frequency distribution. Repeating this procedure 25 times allows us to provide confidence intervals around the accuracy metrics.
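The bootstrap procedure can be sketched with scikit-learn. The polygon feature vectors, labels, and classifier settings below are hypothetical stand-ins (the real tool samples pixels from the TA polygons and runs in Earth Engine):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical polygon-level data: one feature vector and one LCZ
# label per training-area polygon, roughly balanced across classes.
X = rng.normal(size=(300, 34))
y = np.repeat(np.arange(1, 18), 18)[:300]

oa = []
for seed in range(25):  # 25 bootstraps, as in Bechtel et al. (2019)
    # Stratified 70/30 split preserves the LCZ class frequencies.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.30, stratify=y, random_state=seed)
    clf = RandomForestClassifier(n_estimators=50, random_state=seed)
    oa.append(accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te)))

oa = np.array(oa)
# The spread of the 25 OA values yields the confidence interval.
lo, hi = np.percentile(oa, [2.5, 97.5])
```

With random features the accuracies hover around chance level, of course; the point of the sketch is the stratified resampling loop, not the numbers.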
The accuracy metrics used follow previous work (see Demuzere et al., 2020, and references therein): overall accuracy (OA), overall accuracy for the urban LCZ classes only (OAu), overall accuracy of the built versus natural LCZ classes only (OAbu), a weighted accuracy (OAw), and the class-wise F1 metric.
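Three of these metrics can be illustrated with scikit-learn. The labels below are made up for the example, and OAw is omitted because it additionally requires a class-similarity weight matrix:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

URBAN = list(range(1, 11))  # LCZ 1-10 are the built classes

y_true = np.array([1, 2, 2, 11, 12, 14, 3, 11])
y_pred = np.array([1, 2, 3, 11, 14, 14, 3, 12])

# OA: plain overall accuracy over all samples.
oa = accuracy_score(y_true, y_pred)

# OAu: accuracy over samples whose reference label is urban.
urban_mask = np.isin(y_true, URBAN)
oa_u = accuracy_score(y_true[urban_mask], y_pred[urban_mask])

# OAbu: accuracy after collapsing labels to built (1) vs natural (0).
to_bu = lambda y: np.isin(y, URBAN).astype(int)
oa_bu = accuracy_score(to_bu(y_true), to_bu(y_pred))

# Class-wise F1 scores, one per reference class.
f1 = f1_score(y_true, y_pred, average=None, labels=sorted(set(y_true)))
```

Note how OAbu can be perfect even when OA is not: confusing LCZ 12 with LCZ 14 is an error within the natural classes, which the built-vs-natural collapse ignores.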
It is important to note that high overall accuracies do not automatically mean that the map is correct. This can be due to one or more of the following:
The produced LCZ map is representative of the years 2017-2019, because the pre-processed earth observation input features are composed of images from these years. This is done to have a sufficient number of earth observation images available, especially for areas with abundant cloud cover.
The automated TA quality control is performed in three steps, flagging the polygons:
For steps 2 and 3, all earth observation input features are considered simultaneously. Note that this procedure is still experimental, so it is possible that some false positives are returned.
The 3-step TA quality control (see here) facilitates the revision of the original training area polygons, allowing you to revise the initial submission and re-submit your revised training area file to the LCZ Generator. The number of required iterations is of course case-specific, yet previous work suggests performing at least three iterations (Verdonck et al., 2019).
As a rule of thumb, we suggest having at least 5-15 polygons for each LCZ class that is present. There will be variations within an LCZ type in different parts of the city (e.g., different roof colors or building materials) and within the same LCZ at different times of the year (owing to vegetation status), so make sure to digitize examples of this variability.
More guidelines on how to digitize your training areas can be found here.
For the moment, this is not supported by the LCZ Generator. The earth observation input features are pre-processed globally and are representative of the period 2017-2019 only. See here for more info. We might, however, be able to provide support for this offline, so please reach out to email@example.com.
For the moment, this is not supported by the LCZ Generator. The earth observation input features are pre-processed globally and are representative of the period 2017-2019 only (see here). We might, however, be able to provide support for this offline, so please reach out to firstname.lastname@example.org.
By checking the first check-box on the submission page (I consent to show my name in the submission table and in the LCZ-Factsheet), your name will appear in the Author field of the submission table and factsheet.
In case more than one author developed the training area files, please provide their names in the Remarks field of the submission page (firstname, lastname; if multiple, separate them with ";"). These co-authors will also be listed in the remarks field of the factsheet.
The region of interest (ROI) is defined by the bounding box of all training area polygons, with an additional buffer of ~10 km in each direction.
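A plain-Python sketch of that construction, expressing the ~10 km buffer as a fixed offset in degrees (~0.09° of latitude; the Generator's exact buffer logic may differ, e.g. it could work in projected coordinates):

```python
def roi_from_polygons(polygons, buffer_deg=0.09):
    """Bounding box of all training polygon vertices, padded by
    buffer_deg (~0.09 deg latitude is roughly 10 km)."""
    lons = [lon for poly in polygons for lon, lat in poly]
    lats = [lat for poly in polygons for lon, lat in poly]
    return (min(lons) - buffer_deg, min(lats) - buffer_deg,
            max(lons) + buffer_deg, max(lats) + buffer_deg)

# Two hypothetical training polygons as (lon, lat) vertex lists.
polys = [[(4.30, 50.80), (4.35, 50.80), (4.35, 50.85)],
         [(4.40, 50.78), (4.45, 50.82), (4.42, 50.86)]]
bbox = roi_from_polygons(polys)  # (min_lon, min_lat, max_lon, max_lat)
```

Because the ROI grows with the spread of your polygons, widely scattered training areas yield a much larger map (and longer processing time) than a compact set around one city.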
The exact production time depends on the load of the system, the number of training area polygons and the extent of your region of interest. On average, you should receive the results by e-mail in approximately 20 minutes.
Attribution guidelines are provided in the Attribution section of each factsheet: not only for the tool itself, but also for the authors who made the training area datasets and for any of the procedures on which the LCZ Generator is based.