Display a shapefile in Google Earth

Even if Google does not advertise it, it should be known that the Google Earth Pro license has been free since the beginning of 2015. You can

 

The Set colour from field option allows you to manually enter the colour to apply to each of the different values included in the Name field.

 

By default, the loaded layer is not activated. Check the appropriate box to display the layer.

 

ArcHydro: determination of the watersheds of a territory (1)

We will use ArcHydro to calculate the different watersheds of a given territory. It is important to know not only the steps needed to achieve the desired result, but also the assumptions we will be using. Indeed, these assumptions are not exclusive to ArcHydro; they are, however, the only ones available in it.

In order to determine the watersheds, we must follow these steps:

  1. Determination of the direction of the flow
  2. Determination of the flow network
  3. Segmentation of the flow network
  4. Determination of the watershed of each section of the flow network
  5. Merging of the watersheds of each element in order to create the watersheds of the expected size.

 

  1. Determination of the direction of the flow

 

Each cell of the DTM is surrounded by eight other cells.


ArcHydro establishes towards which cell the water flows from the central cell by calculating the slope between the central cell and the eight surrounding ones. It assumes that the water flows to the cell with the steepest slope.

This hypothesis is one of the pillars of ArcHydro, but it is by no means universal. Other algorithms (not available through ArcGis) consider that, even if the largest flow goes to the steepest cell, there will nevertheless be some flow towards every cell lower than the central one.
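As an illustration of this "steepest slope" rule, here is a minimal Python sketch of the D8 principle described above. It is not ArcHydro's actual implementation; the small DEM, the cell size and the use of ESRI's D8 direction codes are assumptions for the example.

```python
import math

# The eight neighbours, with the ESRI D8 direction codes
# (E=1, SE=2, S=4, SW=8, W=16, NW=32, N=64, NE=128).
NEIGHBOURS = [
    (0, 1, 1), (1, 1, 2), (1, 0, 4), (1, -1, 8),
    (0, -1, 16), (-1, -1, 32), (-1, 0, 64), (-1, 1, 128),
]

def d8_direction(dem, row, col, cell_size=1.0):
    """Return the D8 code of the steepest downslope neighbour, or 0 if none."""
    best_code, best_slope = 0, 0.0
    for dr, dc, code in NEIGHBOURS:
        r, c = row + dr, col + dc
        if 0 <= r < len(dem) and 0 <= c < len(dem[0]):
            distance = cell_size * math.hypot(dr, dc)  # diagonals are farther
            slope = (dem[row][col] - dem[r][c]) / distance
            if slope > best_slope:
                best_code, best_slope = code, slope
    return best_code

dem = [[9, 8, 7],
       [8, 5, 4],
       [7, 4, 1]]
print(d8_direction(dem, 1, 1))  # the centre cell drains to the SE cell (code 2)
```

Note that a cell with no lower neighbour gets code 0, which is exactly why the DEM must be corrected (sinks filled) beforehand, as done in the previous articles.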

In order to follow the process, we will use the corrected DEM from the previous articles (ArcHydro: 2- Preparing a corrected DEM for hydrology – Part 1 and ArcHydro: Preparing a corrected DEM for hydrology – Part 2).

To calculate the flow direction, click on « Terrain Preprocessing » -> « Flow Direction ».

The result shows the different flow directions calculated from the DTM.


  2. Determination of the flow network

The second step consists in calculating how many upstream cells flow towards each cell of the DTM.

 

When no cell flows towards the considered cell, the value is zero. When several cells flow towards the considered cell, the value is equal to the sum of the flow values of these cells.
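The counting rule above can be sketched in a few lines of Python. This is an illustration only, not ArcHydro's algorithm; for readability, directions are stored as simple (row, column) offsets rather than D8 codes, and the tiny grid is an assumption.

```python
# Flow accumulation sketch: for each cell, count how many cells drain
# into it, directly or indirectly.

def flow_accumulation(directions):
    rows, cols = len(directions), len(directions[0])

    def count_upstream(row, col):
        # A cell's accumulation is the sum, over every neighbour that
        # drains into it, of that neighbour's own accumulation plus 1.
        total = 0
        for r in range(rows):
            for c in range(cols):
                d = directions[r][c]
                if d is not None and (r + d[0], c + d[1]) == (row, col):
                    total += count_upstream(r, c) + 1
        return total

    return [[count_upstream(r, c) for c in range(cols)] for r in range(rows)]

# Three cells in a row, each draining one cell to the right:
directions = [[(0, 1), (0, 1), None]]
print(flow_accumulation(directions))  # [[0, 1, 2]]
```

The naive scan above is quadratic; real implementations traverse the direction grid once from upstream to downstream, but the counting rule is the same.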

To calculate the flow accumulation, click on « Terrain Preprocessing » -> « Flow Accumulation ».

The only fields to fill in are the input raster (Flow Direction) and the name of the output raster.

The result is presented as follows:

  3. Segmentation of the flow network

 

Why is the flow network not a hydrographic network? The hydrographic network lists the rivers and streams. The qualification as a river or stream rests essentially on the following criteria:

  • The presence and permanence of a natural bed at its origin, distinguishing a watercourse from a man-made channel or ditch.
  • The permanence of the flow during most of the year.

In our case study, we will not take these criteria into consideration. We will build a network based on the accumulation data, using an accumulation threshold, for example 150: when the accumulation reaches 150 cells, we consider that the cell constitutes a section of our hydrographic network. This is completely independent of whether that specific cell lies in an « official » watercourse, although it is clear that, above a certain threshold, it will undoubtedly be included in the « official » watercourses. The threshold definition does not follow any specific rule: it depends on the objective of the study, the size of the area studied, the type of terrain; in short, on many parameters. Put that way, this does not help you much. So how do you find the adequate value?

Let's say that at the end of the processing you will obtain a picture of your overall territory. You will choose the value that gives your hydrographic network the level of detail you need: the smallest size that still gives you information. Bearing in mind that the finer the detail, the heavier the processing becomes, you will have to find a number suited to your objective. It is not worthwhile to increase the detail if you are only aiming for a global view, and it would be equally inconvenient to reduce it too much if your objective is to study the details of your chosen area.
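Whatever threshold you settle on, the thresholding step itself is straightforward. A minimal sketch, assuming an accumulation grid is already available and using the 150-cell threshold from the example (the grid values are illustrative):

```python
# Stream definition sketch: flag every cell whose flow accumulation
# reaches the threshold as part of the drainage network (1), else 0.

def define_streams(accumulation, threshold=150):
    return [[1 if value >= threshold else 0 for value in row]
            for row in accumulation]

accumulation = [[3, 80, 200],
                [0, 150, 420]]
print(define_streams(accumulation))  # [[0, 0, 1], [0, 1, 1]]
```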

 

To define the stream network, click « Terrain Preprocessing » -> « Stream Definition ». The definition window opens:


You can enter either the number of accumulated cells or the average watershed area above which we consider that a real stream exists. Conveniently, you only need to enter one of the two values: ArcHydro automatically calculates the other. In our example, we enter 150 cells, which corresponds to a catchment area of about 0.84 sq km.
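The correspondence between the two values is simple arithmetic: area = number of cells multiplied by the cell area. A quick check, assuming (inferred from the figures above, since the cell size is not stated) a DEM cell of roughly 75 m:

```python
# Convert a cell-count threshold into a catchment area in sq km.
# The 74.8 m cell size is an assumption deduced from 150 cells ~ 0.84 sq km.

def cells_to_sqkm(n_cells, cell_size_m):
    return n_cells * cell_size_m ** 2 / 1e6

print(round(cells_to_sqkm(150, 74.8), 2))  # 0.84
```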


The result is the following:

In the next article we will use this network in order to calculate the watersheds.


The spatial-temporal cube of ArcGis: 2- Hot and Cold Spots

In the previous article (The spatio-temporal cube of ArcGis: 1- discovery) we discussed how to create a spatial-temporal cube from a cloud of XYZ points.

The points are grouped into « boxes » (bins) corresponding to 3D pixels. We have discussed how to display them in an ArcGis Pro 3D scene. In this article we will discuss how to analyze the temporal trends of the bin values, using the emerging hot spot analysis.
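As a reminder, the binning principle can be sketched as follows. This is an illustration, not ArcGis Pro's implementation; the sample coordinates are assumptions, and the bin sizes are the 40 km / 12 h values used in the previous article.

```python
from collections import Counter

def bin_points(points, xy_size, t_size):
    """Group (x, y, t) points into 3D bins by integer division of the
    coordinates by the bin sizes; returns bin index -> point count."""
    return Counter(
        (int(x // xy_size), int(y // xy_size), int(t // t_size))
        for x, y, t in points
    )

# Bins of 40 km in X/Y and 12 hours in time:
points = [(5_000, 12_000, 3), (38_000, 1_000, 11), (40_500, 1_000, 13)]
counts = bin_points(points, xy_size=40_000, t_size=12)
print(counts)
# The first two points share bin (0, 0, 0); the third falls in bin (1, 0, 1).
```

The per-bin counts are the COUNT variable that the emerging hot spot analysis works on below.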

 

The purpose of this tool is to identify trends in the grouping of the values of the cube built from your data, the so-called « spatial aggregates ».

For a description of the mathematical method this tool relies on (Getis-Ord Gi*), you can download the article by Getis and Ord. Let us just say here that the method determines, along the three axes, whether the values of the points in your cube form, statistically, clusters of high values (hot spots) or clusters of low values (cold spots).

 

The result of the method entails:

  • a so-called p-value, which is the probability that the spatial distribution of the points is due to chance. If the p-value is large, it is very likely that the distribution of the 3D pixels around that point is simply the result of chance. A low p-value, on the contrary, indicates that the grouping is very likely not random, and therefore the result of a specific phenomenon.
  • a so-called z-score, which is a standard-deviation value. It is not independent of the p-value, but complementary to it.

If you find a low p-value, you can think that there is a strong probability that the observed phenomenon is significant.

OK, but how strong? The z-score tells you: if its value is more than three standard deviations from the mean, you have less than a 1% chance of being misled; if it is between 2 and 3, 5%; and if it is between 1 and 2, 10%. It is then up to you, depending on the subject and your knowledge, to determine the acceptable risk level for your hypothesis.
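The link between the two values can be checked numerically: for a two-tailed test under a normal distribution, p = erfc(|z| / sqrt(2)). Note that the exact cut-offs are in fact 1.65, 1.96 and 2.58 standard deviations for the 90%, 95% and 99% confidence levels; the rule of thumb above rounds them. A quick sketch:

```python
import math

def p_value(z):
    """Two-tailed p-value of a z-score under the standard normal law."""
    return math.erfc(abs(z) / math.sqrt(2))

for z in (1.65, 1.96, 2.58):
    print(f"z = {z}: p = {p_value(z):.3f}")
# p is about 0.10, 0.05 and 0.01 respectively
```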

We will keep working with the example of the previous article, using the same cube. To perform the analysis, launch the other command available in the Space Time Pattern Mining toolbox: Emerging Hot Spot Analysis.

The first three fields are simple: input cube, output layer and, for the moment, Analysis Variable, where your only choice is COUNT.

However, the next two fields can be confusing. Indeed, during the construction of the cube, in the previous step, there were two similar fields, which were used to group the points into 3D pixels. We had defined pixels of 40 km in X and Y and 12 hours in Z (time).

But that is not what is at stake here. Now you must indicate (or let the tool calculate by default) what you consider to be the neighbourhood of a 3D pixel. Do you consider pixels located 200 km apart as neighbours or not? Do you consider pixels separated by two days as neighbours or not?

Depending on your answer (or the lack of one), the results will not be the same.

If you do not fill in these fields, the default calculation of the XY neighbourhood is a little obscure. The default time value, on the other hand, is 1.

 

Once the processing is complete, you will find the new layer displayed in your 2D view with the following classification:

To read the results, you have two groups:

  • a group of « hot spots », where something is happening;
  • a group of « cold spots », where nothing is happening;

and a third « group » that includes the 3D pixels which could not be classified as either hot or cold: « No trend detected ».

 

Then, for each type (hot or cold) you'll find the 3D pixels classified according to:

New Spot: the last 3D pixel of the time series appears as a hot or cold spot, while at all the other time steps this location was not considered as either. These are the points to monitor in the next time series, to verify whether they are truly significant points or not.

Consecutive Spot: a spot with a single uninterrupted run of statistically significant hot spot bins at the end of the series. Less than ninety percent of all bins are statistically significant hot spots.

Intensifying Spot: more than 90% of the time steps are significant and the values increase as time passes. The last pixels are stronger than those of the beginning. The last time step is significant.

Persistent Spot: more than 90% of the time steps are significant, but there is no increasing or decreasing trend in the time series.

Diminishing Spot: more than 90% of the time steps are significant and the values decrease over time. The last pixels are weaker than those of the beginning. The last time step is significant.

Sporadic Spot: less than 90% of the time steps are significant. Significant time steps are interspersed among non-significant ones. If the significant time steps are hot spots, then there are no cold spots in the series, and vice versa.


Oscillating Spot: less than 90% of the time steps are significant. As in the previous category, non-significant time steps are interspersed among significant ones. However, the significant time steps include hot spots as well as cold spots.

Historical Spot: the most recent period is not significant, but at least ninety percent of the time steps have been statistically significant hot or cold spots.
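Some of these categories can be sketched as simple tests on the sequence of per-time-step significance flags of one bin column. This is a simplified illustration, not ESRI's implementation: only a few categories are handled, and the trend test on the values (Mann-Kendall) used by the real tool is omitted.

```python
# Classify one bin column from its hot-spot significance flags,
# one boolean per time step, oldest first.

def classify(significant):
    share = sum(significant) / len(significant)
    if share >= 0.9:
        # >= 90% of steps significant: persistent if still significant,
        # historical if the most recent step no longer is.
        return "persistent" if significant[-1] else "historical"
    if significant[-1] and not any(significant[:-1]):
        return "new"       # only the last step is significant
    if significant[-1]:
        return "sporadic"  # significant steps interspersed with others
    return "no pattern"

print(classify([False, False, False, True]))      # new
print(classify([True] * 10))                      # persistent
print(classify([True] * 9 + [False]))             # historical
print(classify([True, False, True, False, True])) # sporadic
```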

These results apply to each time series of the cube, i.e. to each of its columns. Therefore this representation is necessarily in two dimensions.

On the other hand, each step of a time series can be significant or not, at one, two or three standard deviations. So we can visualize these results with the 3D visualization tool of the spatial-temporal cube (see the previous article on this topic).

You must have a 3D window (Scene) in your project. If you do not, click Insert -> New Scene.

You must remove the reference to an elevation layer. If you do not, the cube columns will be displayed relative to the ground level and you will lose the visual correspondence between the different time steps.

In the geoprocessing window, launch the 3D display command.


Compared to the same command launched in the previous article, we have now performed a hot spot analysis, so in the « Variable Display » window, in addition to COUNT, we find and select « Hot and Cold Spot Results ».

The result of the command is displayed:

And the legend window lets us see which classes are displayed:

The hot spots are in red, the cold spots in blue, with different colour intensities denoting the confidence of the results: one standard deviation = 90%, two standard deviations = 95%, three standard deviations = 99%.

With these two articles you have what you need to start trying spatial-temporal analysis. In a future article we will discuss the different data pre-processing techniques to be used with these two tools.