ENC maps in QGis with Postgis (1)

This subject comprises two articles. The first deals with importing ENC S57 files into a PostgreSQL/PostGIS database. The second deals with setting up an automatic symbology equivalent to nautical charts. You can download all the scripts in this article at https://www.nasca.ovh/downloads/fichiers_enc_postgis1.7z or go directly to GitHub: https://github.com/SigEtTerritoires/enc_postgis

We advise you to unzip all the SQL (.sql) and batch (.bat) files into a directory of your choice, bearing in mind that you will need to change the directory used in the scripts to your own directory name.
For the symbology, as you'll see later, you'll have the choice of creating either the default directory C:/nautical (recommended) or a directory of your choice.

The S57 format and GIS

If you’re about to start working with S57 files in a GIS, there are a few things you need to be aware of to navigate without pitfalls.

First of all, the structure of S57 files does not correspond to the structures adopted in GIS.

In a GIS, you have a geographic object represented by a table with two types of information: the geometry of the object’s entities and the attributes of these entities.

If you have other objects with identical geometries, the geometric information is duplicated, once in each table.

For the S57 format, the main objective is to optimize information storage and therefore not to duplicate information. If an object has a point entity, a point will be created. If other objects have entities located on this point, we use the reference of the point that has already been created. In this way, a point is only described once in the file. The same applies to polylines and surfaces. An S57 file will therefore have a series of tables containing geometric information, called “primitives”:

  • IsolatedNode (points)
  • ConnectedNode (points)
  • Edge (polylines)
  • Face (polygons)

The attribute table for the various S57 objects contains only the object attributes.
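You can check this structure yourself with GDAL's ogrinfo utility (available in the OSGeo4W shell used later in this article). With the RETURN_PRIMITIVES open option, the primitive tables are listed alongside the object classes (the cell name below is just an example):

ogrinfo -ro -q -oo RETURN_PRIMITIVES=ON US5FL12M.000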

Geometry types

S57 “layers” are object classes. For example, the various land zones are encoded in the LNDARE object class.

The definition of this object class is:

Geo object: Land area (LNDARE) (P,L,A)
Attributes: CONDTN OBJNAM NOBJNM STATUS INFORM NINFOM

While all LNDARE objects have the same attributes, the same cannot be said of the geometry type. The information (P, L, A) indicates that points, lines and polygons can be found in this object class. Unlike GIS standards, all three geometry types coexist within the same object class.

To switch from the S57 format to a Postgresql/Postgis database, we’ll be using the GDAL library, included with QGis.

Operations with GDAL will be carried out from the command line in an OSGeo4W window.

The basic syntax for processing an S57 file (extension .000) and importing it into a PostgreSQL/PostGIS database is as follows:

ogr2ogr -f PostgreSQL PG:"dbname='postgis_34_sample' host='localhost' port='5434' user='postgres' password='psw' active_schema='schema'" file.000

We’ll look at the various options to be used and their function, but for now we’ll add the -skipfailures -append -update options.

ogr2ogr -skipfailures -append -update -f PostgreSQL PG:"dbname='postgis_34_sample' host='localhost' port='5434' user='postgres' password='psw' active_schema='schema'" file.000

The first option allows processing to continue even if errors are detected. The other two ensure that existing output PostgreSQL tables are not overwritten, but updated by adding the data from the input S57 file.

Whichever format is chosen for integration into QGis, we will need to create a layer for each type of geometry. If we don’t, GDAL will create the layer type according to the first entity found during conversion. If it’s a point type, the layer created will be a point type, and the line and polygon entities will be ignored. Similarly, if the first entity is of line type, points and polygons will be ignored.

It should also be noted that the S57 format has no constraints on object classes: if there is no object class in the area covered by the S57 file, there will be nothing concerning it in the file (no empty layer). Similarly, if only one type of geometry is present, even though all three types are possible, there will be no trace of the other types of geometry.

The processing of an S57 file with ogr2ogr must therefore be broken down into three steps, one for each type of geometry. The following options allow you to process each S57 object class by selecting only one type of geometry:

-where "OGR_GEOMETRY='POINT' or OGR_GEOMETRY='MULTIPOINT'"

-where "OGR_GEOMETRY='LINESTRING' or OGR_GEOMETRY='MULTILINESTRING'"

-where "OGR_GEOMETRY='POLYGON' or OGR_GEOMETRY='MULTIPOLYGON'"
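For example, combining the base command with the first filter gives a complete command of this kind (connection parameters, schema and file name are placeholders to adapt):

ogr2ogr -skipfailures -append -update -f PostgreSQL ^
    -where "OGR_GEOMETRY='POINT' or OGR_GEOMETRY='MULTIPOINT'" ^
    PG:"dbname='postgis_34_sample' host='localhost' port='5434' user='postgres' password='psw' active_schema='schema'" file.000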

For certain types of driver, GDAL allows you to create prefixes in output tables. In this case, you could create all tables (points, lines, polygons) in a single schema, prefixing them with pt_, li_ and pl_, for example. The problem is that GDAL’s S57 driver does not allow this option. We then need to create three separate schemas, in which tables will be created according to the type of geometry. Each schema will then contain a table with the same name but a different geometry.

Here, we'll use three import schemas for the ogr2ogr commands, into which we'll import the tables from an S57 file. We'll call them pointsenc, linesenc and polysenc. We'll also create a schema called ENC for the complete database. For each ENC map, we'll import its contents into the three import schemas with ogr2ogr, then execute SQL queries to update the ENC schema database with the newly imported tables. Whether you have a single S57 file to load or a batch of S57 files to load simultaneously, the process is as follows:

  1. Load the point object classes into the pointsenc schema with ogr2ogr.
  2. Load the line object classes into the linesenc schema with ogr2ogr.
  3. Load the polygon object classes into the polysenc schema with ogr2ogr.
  4. Remove the empty tables from the three import schemas.
  5. Update the tables of the ENC schema that already exist from the three import schemas.
  6. Clone the tables of the three import schemas that were not yet present in the ENC schema.
  7. Delete the contents of the tables in the three import schemas.

Processing bathymetric soundings

Bathymetric soundings pose several problems during format conversion. The first is that the depth value is not contained in an attribute, but as the Z of the geometry (XYZ). The second is that soundings are not of the Point type, but of the Multipoint type. To obtain the sounding values directly in an attribute field, you need to add two parameters to the ogr2ogr command line:

-oo SPLIT_MULTIPOINT=ON -oo ADD_SOUNDG_DEPTH=ON

The first converts multipoint entities into individual points, and the second adds a “DEPTH” attribute field to the output table with the geometry’s Z value.
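As an illustration, loading only the SOUNDG object class with these two options would look like the following (connection parameters and file name are placeholders to adapt):

ogr2ogr -skipfailures -append -update -f PostgreSQL ^
    -oo SPLIT_MULTIPOINT=ON -oo ADD_SOUNDG_DEPTH=ON ^
    PG:"dbname='postgis_34_sample' host='localhost' port='5434' user='postgres' password='psw' active_schema='schema'" file.000 SOUNDG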

Retrieving “parent” identifiers

Some objects can be grouped by a non-geographic parent object. For example, light sectors are coded with one sector per record in the "LIGHTS" table. The different sectors of the same light have no particular attribute indicating that they belong to the same entity. This information is contained in a "parent" record. To retrieve this identifier as an attribute in the LIGHTS table, we need to add an option to the command line:

-oo RETURN_LINKAGES=ON

This option creates a name_rcid attribute that is common to all the sectors of the same light, but it also creates a series of additional fields (name_rcnm, ORNT, USAG, MASK).
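Once the data has been loaded, this shared reference can be used to regroup the sectors of a single light. Here is a hypothetical sketch, assuming a pt_lights table produced by the import workflow described later (adapt the schema name to your own):

-- Hypothetical sketch: count the sectors that belong to the same light
-- (same chart, same referenced point), using the first element of name_rcid.
SELECT enc_chart, name_rcid[1] AS light_ref, COUNT(*) AS nb_sectors
FROM enc.pt_lights
GROUP BY enc_chart, name_rcid[1]
HAVING COUNT(*) > 1;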

“List” type processing

In addition to the classic field types (integer, real, text), the S57 format uses a special type: lists of character strings or integers. These field types exist in PostgreSQL, but not in shapefiles or geopackages. Unlike with those formats, it is therefore not necessary to transform the S57 lists into character strings. In fact, doing so is strongly discouraged, as it would complicate the processing of list attributes.
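In PostgreSQL these lists arrive as array columns, so they can be queried directly with the array operators. A hypothetical sketch against the pt_sbdare table described later (adapt the schema name):

-- Hypothetical sketch: seabed-nature points whose NATSUR list contains
-- the value '9' (rock), using the ANY array operator.
SELECT ogc_fid, natsur, natqua
FROM enc.pt_sbdare
WHERE '9' = ANY(natsur);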

Managing duplicate information

When you load several ENC maps into the same database, you may be confronted with duplicate information when two or more maps overlap.

These duplicates can be of two different types:

"True" duplicates, where all the attributes are identical. These are rare, as they result from the superimposition of two chart coverages of very similar scale. They can also result from an error, such as reloading a duplicate S57 file. These duplicates can be deleted.

"False" duplicates, where layer information is duplicated without the record identifiers necessarily being identical. These occur when areas mapped at different scales are loaded into the same database. Moreover, depending on the scale, the same information (an obstruction, for example) may have a different geometry (a point on one map, an area on another). This type of duplication should not be deleted, but managed.

In order to manage these duplicates, as well as the correct display of information, it is necessary to add map attributes to the layers. In fact, once integrated into a layer, entities from different files have no attributes that can be traced back to their source.

We have included the necessary code in the SQL import procedure to add the file name, data compilation scale and chart purpose to all tables in the schema.

As far as the name is concerned, it may be useful in later database maintenance to be able to select the entities corresponding to an S57 source file.

While the name of the map is of relative use, the same cannot be said of the scale, as this is an indispensable criterion in the search for duplicates. Indeed, when mixing entities from several S57 files, data from different scales will coexist more or less happily.

In addition to the scale, there’s another piece of information that can be very useful: the purpose of the map. This is a value between 1 and 6, corresponding to the main objective of the map:

  • 1: Overview
  • 2: General
  • 3: Coastal
  • 4: Approach
  • 5: Port
  • 6: Docking

In the PostgreSQL/Postgis project database, we have imported 4500 S57 files.
The min and max scale values for all maps with purpose 5 are 3000 and 60000. The min and max scale values for purpose 6 are 2500 and 15000. We can see that the scale ranges overlap: scale values typical of the most detailed maps are also found among the purpose 5 maps.

Here's the result for all the purposes:

Table of purposes and corresponding scales

Purpose    min_scale    max_scale
1          325000       10000000
2          100000       1534076
3          50000        600000
4          12500        150000
5          3000         60000
6          2500         15000
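Once the scale and purpose attributes described below have been added to the tables, figures of this kind can be obtained with a simple aggregation. This is an illustrative query run against a single layer (pl_lndare here); adapt the schema and table names to your own database:

-- Illustrative query: scale range observed for each navigational purpose.
SELECT purpose, MIN(scale) AS min_scale, MAX(scale) AS max_scale
FROM enc.pl_lndare
GROUP BY purpose
ORDER BY purpose;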

The DSID table of an S57 file contains the two attributes we need, the S57 file name (DSID_DSNM) and the data compilation scale (DSPM_CSCL), as well as the 'purpose' attribute, which appears in the same table under the name DSID_INTU.

As ogr2ogr commands only load spatial tables by default, we’ll need to run a special command requesting the loading of the DSID table:

ogr2ogr -skipfailures -append -update -f PostgreSQL PG:"dbname=…" file.000 DSID

Importing “primitives”

An S57 file therefore has a series of tables containing geometric information, called “primitives”:

  • IsolatedNode (points)
  • ConnectedNode (points)
  • Edge (polylines)
  • Face (polygons)

The attribute table for the various S57 objects contains only the object attributes.

What complicates the task is that there are two attributes that refer to the geometries: posacc (the estimated precision of the position, a quantitative value) and quapos (quality of the position, a qualitative variable).

These two attributes can be found in the primitive tables.

To switch from the S57 structure to a GIS structure (shapefile, geopackage, postgis), we use the GDAL library and its ogr2ogr command.

This command creates GIS tables from the S57 structure, creating one table per S57 object, assigning the corresponding geometries from the primitive tables to each entity and adding the S57 object attributes to each entity. The trace of the primitives used for the geometry of each entity can be found in the NAME_RCID field of the GIS tables, provided the options -oo “RETURN_LINKAGES=ON” -oo “LNAM_REFS=ON” have been added to the ogr2ogr command line.

The following figure shows a point-type layer. The value indicated in the NAME_RCID field is the RCID of the point used in the IsolatedNode table.

Point-type table

The following figure shows an example of a linear layer. The values indicated in the NAME_RCID field are the RCID values of the polylines used in the Edge table.

Polyline-type table

The following figure shows an example of a polygon layer. The values indicated in the NAME_RCID field are those of the polylines used in the Edge table.

Polygon-type table

In order to retrieve the QUAPOS and POSACC attributes of each entity in the point-type tables, we need to retrieve the values of the IsolatedNode point and assign them to the tables of the various ENC objects.

If the identifiers were directly the RCIDs in the ENC tables, we could make a join between each table (soundg, obstrn,…) and IsolatedNode. But as you can see in the previous images, the NAME_RCID attribute is of stringlist type, which blocks this solution. We’ve therefore developed a SQL query that does the job when loading data from the import schemas into the ENC schema.
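The principle of that query, implemented in the loading function presented later, boils down to an update of this kind. This is a simplified sketch for a single hypothetical point table; the real function loops over every table of the schema and the schema names must be adapted:

-- Simplified sketch: recover posacc/quapos from the IsolatedNode primitive
-- by matching the first element of the NAME_RCID list and the source chart.
UPDATE enc.pt_obstrn o
SET posacc = n.posacc,
    quapos = n.quapos
FROM pointsenc.isolatednode n
WHERE o.name_rcid[1] = n.rcid
  AND o.enc_chart = n.enc_chart;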

PostgreSQL schemas

To create import tables in a specific schema, you need to use ogr2ogr’s active_schema option. With this option, PostgreSQL allows you to create tables with the same name in several different schemas. The only exception is the Public schema. If a table name is used in this schema, the active_schema option is ignored. You must therefore be careful not to create S57 tables in the Public schema. If you do, you’ll have to delete them manually, one by one.
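A quick way to check that nothing has landed in the public schema by mistake is to list its tables (PostGIS's own spatial_ref_sys table is excluded here; adjust the exclusions to your setup):

-- List the tables present in the public schema.
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
  AND table_type = 'BASE TABLE'
  AND table_name <> 'spatial_ref_sys';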

Creation with a SQL query

The following query creates the four schemas required:

createschemas.sql

CREATE SCHEMA IF NOT EXISTS encm
AUTHORIZATION pg_database_owner;
GRANT ALL ON SCHEMA encm TO pg_database_owner;
CREATE SCHEMA IF NOT EXISTS pointsenc
AUTHORIZATION pg_database_owner;
GRANT ALL ON SCHEMA pointsenc TO pg_database_owner;
CREATE SCHEMA IF NOT EXISTS linesenc
AUTHORIZATION pg_database_owner;
GRANT ALL ON SCHEMA linesenc TO pg_database_owner;
CREATE SCHEMA IF NOT EXISTS polysenc
AUTHORIZATION pg_database_owner;
GRANT ALL ON SCHEMA polysenc TO pg_database_owner;

Creating with pgAdmin

To create the schemas, open pgAdmin4 and, on Schemas, open the context menu -> Create -> Schema

Create schema in pgAdmin4

Enter the desired name for the schema:

create schema general

Repeat this operation to create the four schemas required for our ENC database.

postgresql schemas

ATTENTION: The two creation methods do not produce the same result. When schemas are created in pgAdmin, their names are case-sensitive and must be quoted ("PointsENC", for example). When created with the SQL query, names are folded to lower case and need no quotation marks (pointsenc, for example). To avoid this discrepancy, don't use upper case in pgAdmin.

In the remainder of this article, we’ll use the names derived from the SQL query. If you choose to use capital letters, you’ll need to modify the schema names in the code provided.

Preparing import schemas

The table import phase with ogr2ogr is not limited to simply executing the S57->Postgresql transcoding command.

Once the schema has been populated with the class tables from the S57 file, we’ll add the DSID table, which will be used to retrieve the name of the S57 file, the data compilation scale and the purpose of the map.

Before doing so, we need to add these attributes to all non-empty tables resulting from the ogr2ogr command, taking the opportunity to also create the QUAPOS and POSACC attributes which will be filled in later.

The principle of this step is as follows:

  1. In the batch file (.bat), execute the command to load a geometry type.
  2. Once this line of code has been executed, the DSID table is loaded in the same schema.
  3. An AFTER INSERT trigger on the schema’s dsid table first deletes empty tables, then creates the enc_chart, scale, purpose, posacc and quapos attributes in all tables. It then adds the values of enc_chart, scale and purpose to all table records.

This procedure is performed for each type of geometry: point, line and polygon.

For this procedure to work, we need to set up a few elements in the Postgresql database.

Import schema DSID tables

Since we'll be using a trigger on these tables, they must be created before the first load. So, in each import schema, we'll create a table:

  • Pointsdsid in the pointsenc schema
  • Linesdsid in the linesenc schema
  • Polysdsid in the polysenc schema

create_DSID_tables.sql

-- Create the sequence pointsdsid_ogc_fid_seq
CREATE SEQUENCE IF NOT EXISTS pointsenc.pointsdsid_ogc_fid_seq
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;

-- Create the sequence polysdsid_ogc_fid_seq

CREATE SEQUENCE IF NOT EXISTS polysenc.polysdsid_ogc_fid_seq
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;

-- Create the sequence linesdsid_ogc_fid_seq

CREATE SEQUENCE IF NOT EXISTS linesenc.linesdsid_ogc_fid_seq
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
-- Table: pointsenc.pointsdsid

-- DROP TABLE IF EXISTS pointsenc.pointsdsid;

CREATE TABLE IF NOT EXISTS pointsenc.pointsdsid
(
ogc_fid integer NOT NULL DEFAULT nextval('pointsenc.pointsdsid_ogc_fid_seq'::regclass),
dsid_expp numeric(3,0),
dsid_intu numeric(3,0),
dsid_dsnm character varying COLLATE pg_catalog."default",
dsid_edtn character varying COLLATE pg_catalog."default",
dsid_updn character varying COLLATE pg_catalog."default",
dsid_uadt character varying(8) COLLATE pg_catalog."default",
dsid_isdt character varying(8) COLLATE pg_catalog."default",
dsid_sted numeric(11,6),
dsid_prsp numeric(3,0),
dsid_psdn character varying COLLATE pg_catalog."default",
dsid_pred character varying COLLATE pg_catalog."default",
dsid_prof numeric(3,0),
dsid_agen numeric(5,0),
dsid_comt character varying COLLATE pg_catalog."default",
dssi_dstr numeric(3,0),
dssi_aall numeric(3,0),
dssi_nall numeric(3,0),
dssi_nomr numeric(10,0),
dssi_nocr numeric(10,0),
dssi_nogr numeric(10,0),
dssi_nolr numeric(10,0),
dssi_noin numeric(10,0),
dssi_nocn numeric(10,0),
dssi_noed numeric(10,0),
dssi_nofa numeric(10,0),
dspm_hdat numeric(3,0),
dspm_vdat numeric(3,0),
dspm_sdat numeric(3,0),
dspm_cscl numeric(10,0),
dspm_duni numeric(3,0),
dspm_huni numeric(3,0),
dspm_puni numeric(3,0),
dspm_coun numeric(3,0),
dspm_comf numeric(10,0),
dspm_somf numeric(10,0),
dspm_comt character varying COLLATE pg_catalog."default",
CONSTRAINT pointsdsid_pkey PRIMARY KEY (ogc_fid)
)

TABLESPACE pg_default;

ALTER TABLE IF EXISTS pointsenc.pointsdsid
OWNER to postgres;
-- Table: polysenc.polysdsid

-- DROP TABLE IF EXISTS polysenc.polysdsid;

CREATE TABLE IF NOT EXISTS polysenc.polysdsid
(
ogc_fid integer NOT NULL DEFAULT nextval('polysenc.polysdsid_ogc_fid_seq'::regclass),
dsid_expp numeric(3,0),
dsid_intu numeric(3,0),
dsid_dsnm character varying COLLATE pg_catalog."default",
dsid_edtn character varying COLLATE pg_catalog."default",
dsid_updn character varying COLLATE pg_catalog."default",
dsid_uadt character varying(8) COLLATE pg_catalog."default",
dsid_isdt character varying(8) COLLATE pg_catalog."default",
dsid_sted numeric(11,6),
dsid_prsp numeric(3,0),
dsid_psdn character varying COLLATE pg_catalog."default",
dsid_pred character varying COLLATE pg_catalog."default",
dsid_prof numeric(3,0),
dsid_agen numeric(5,0),
dsid_comt character varying COLLATE pg_catalog."default",
dssi_dstr numeric(3,0),
dssi_aall numeric(3,0),
dssi_nall numeric(3,0),
dssi_nomr numeric(10,0),
dssi_nocr numeric(10,0),
dssi_nogr numeric(10,0),
dssi_nolr numeric(10,0),
dssi_noin numeric(10,0),
dssi_nocn numeric(10,0),
dssi_noed numeric(10,0),
dssi_nofa numeric(10,0),
dspm_hdat numeric(3,0),
dspm_vdat numeric(3,0),
dspm_sdat numeric(3,0),
dspm_cscl numeric(10,0),
dspm_duni numeric(3,0),
dspm_huni numeric(3,0),
dspm_puni numeric(3,0),
dspm_coun numeric(3,0),
dspm_comf numeric(10,0),
dspm_somf numeric(10,0),
dspm_comt character varying COLLATE pg_catalog."default",
CONSTRAINT polysdsid_pkey PRIMARY KEY (ogc_fid)
)

TABLESPACE pg_default;

ALTER TABLE IF EXISTS polysenc.polysdsid
OWNER to postgres;
-- Table: linesenc.linesdsid

-- DROP TABLE IF EXISTS linesenc.linesdsid;

CREATE TABLE IF NOT EXISTS linesenc.linesdsid
(
ogc_fid integer NOT NULL DEFAULT nextval('linesenc.linesdsid_ogc_fid_seq'::regclass),
dsid_expp numeric(3,0),
dsid_intu numeric(3,0),
dsid_dsnm character varying COLLATE pg_catalog."default",
dsid_edtn character varying COLLATE pg_catalog."default",
dsid_updn character varying COLLATE pg_catalog."default",
dsid_uadt character varying(8) COLLATE pg_catalog."default",
dsid_isdt character varying(8) COLLATE pg_catalog."default",
dsid_sted numeric(11,6),
dsid_prsp numeric(3,0),
dsid_psdn character varying COLLATE pg_catalog."default",
dsid_pred character varying COLLATE pg_catalog."default",
dsid_prof numeric(3,0),
dsid_agen numeric(5,0),
dsid_comt character varying COLLATE pg_catalog."default",
dssi_dstr numeric(3,0),
dssi_aall numeric(3,0),
dssi_nall numeric(3,0),
dssi_nomr numeric(10,0),
dssi_nocr numeric(10,0),
dssi_nogr numeric(10,0),
dssi_nolr numeric(10,0),
dssi_noin numeric(10,0),
dssi_nocn numeric(10,0),
dssi_noed numeric(10,0),
dssi_nofa numeric(10,0),
dspm_hdat numeric(3,0),
dspm_vdat numeric(3,0),
dspm_sdat numeric(3,0),
dspm_cscl numeric(10,0),
dspm_duni numeric(3,0),
dspm_huni numeric(3,0),
dspm_puni numeric(3,0),
dspm_coun numeric(3,0),
dspm_comf numeric(10,0),
dspm_somf numeric(10,0),
dspm_comt character varying COLLATE pg_catalog."default",
CONSTRAINT linesdsid_pkey PRIMARY KEY (ogc_fid)
)

TABLESPACE pg_default;

ALTER TABLE IF EXISTS linesenc.linesdsid
OWNER to postgres;

Setting up AFTER INSERT triggers

After inserting a row in the schema’s DSID table, these triggers will

  • delete empty tables in the schema,
  • check whether the additional fields already exist and, if not, create the 5 attributes (enc_chart, scale, purpose, posacc and quapos),
  • add the values of the DSID record to the enc_chart, scale and purpose attributes,
  • and finally delete the record from the DSID table.

There is one function per geometry type,

  • create_fields_and_update_values_pointsenc()
  • create_fields_and_update_values_linesenc()
  • create_fields_and_update_values_polysenc()

then a SQL query to set up triggers on each DSID table (pointsDSID, linesDSID, polysDSID).

create_fields_and_update_values_pointsenc()

CREATE OR REPLACE FUNCTION create_fields_and_update_values_pointsenc()

RETURNS TRIGGER AS
$$
DECLARE
table_record RECORD;
tables_import RECORD;
enc_chart_value TEXT;
scale_value NUMERIC;
purpose_value NUMERIC;
empty_tables int;
BEGIN
-- The following code deletes the empty tables of an import schema:
FOR table_record IN SELECT table_name FROM information_schema.tables WHERE table_schema = 'pointsenc' AND table_name != 'pointsdsid' LOOP
-- Build a dynamic query to check whether the table is empty
EXECUTE format('SELECT COUNT(*) FROM pointsenc.%I', table_record.table_name) INTO empty_tables;

-- If the row count is zero, drop the table
    IF empty_tables = 0 THEN
        EXECUTE format('DROP TABLE IF EXISTS pointsenc.%I CASCADE',  table_record.table_name);
        RAISE NOTICE 'Table pointsenc.% dropped because it is empty.',  table_record.table_name;
    END IF;
END LOOP;

-- Loop over all the tables of the pointsenc schema
FOR table_record IN SELECT table_name FROM information_schema.tables WHERE table_schema = 'pointsenc' AND table_name != 'pointsdsid' LOOP
    -- Check whether the enc_chart, scale and purpose fields are missing from the current table
    IF NOT EXISTS (
        SELECT column_name FROM information_schema.columns
        WHERE table_schema = 'pointsenc' AND table_name = table_record.table_name AND column_name IN ('enc_chart', 'scale','purpose')
    ) THEN
        -- Create the enc_chart field (text)
        EXECUTE format('ALTER TABLE pointsenc.%I ADD COLUMN enc_chart TEXT', table_record.table_name);
        -- Create the scale field (numeric)
        EXECUTE format('ALTER TABLE pointsenc.%I ADD COLUMN scale NUMERIC', table_record.table_name);
        -- Create the purpose field (numeric)
        EXECUTE format('ALTER TABLE pointsenc.%I ADD COLUMN purpose NUMERIC', table_record.table_name);
            IF NOT EXISTS (
                SELECT column_name FROM information_schema.columns
                WHERE table_schema = 'pointsenc' AND table_name = table_record.table_name AND column_name IN ('posacc','quapos')
                ) THEN
                -- Create the POSACC field (numeric)
                EXECUTE format('ALTER TABLE pointsenc.%I ADD COLUMN posacc NUMERIC(10,0)', table_record.table_name);
                -- Create the QUAPOS field (integer)
                EXECUTE format('ALTER TABLE pointsenc.%I ADD COLUMN quapos INTEGER', table_record.table_name);
            END IF;
        RAISE NOTICE 'Fields enc_chart, scale, purpose, POSACC and QUAPOS created in table %', table_record.table_name;

    END IF;
        -- Get the enc_chart value from the DSID table (DSID_DSNM)
        SELECT DSID_DSNM INTO enc_chart_value FROM pointsenc.pointsdsid LIMIT 1;
        -- Get the scale value from the DSID table (DSPM_CSCL)
        SELECT DSPM_CSCL INTO scale_value FROM pointsenc.pointsdsid LIMIT 1;
        -- Get the purpose value from the DSID table (DSID_INTU)
        SELECT DSID_INTU INTO purpose_value FROM pointsenc.pointsdsid LIMIT 1;

        -- Update the records with the values found in the DSID table
    EXECUTE format('UPDATE pointsenc.%I SET enc_chart = $1 WHERE enc_chart IS NULL', table_record.table_name) USING enc_chart_value;
    EXECUTE format('UPDATE pointsenc.%I SET scale = $1 WHERE scale IS NULL', table_record.table_name) USING scale_value;
    EXECUTE format('UPDATE pointsenc.%I SET purpose = $1 WHERE purpose IS NULL', table_record.table_name) USING purpose_value;


END LOOP;

-- Delete the record from the DSID table
DELETE FROM pointsenc.pointsdsid;

RETURN NULL;

END;
$$
LANGUAGE plpgsql;

create_fields_and_update_values_linesenc()



CREATE OR REPLACE FUNCTION create_fields_and_update_values_linesenc()
RETURNS TRIGGER AS
$$
DECLARE
table_record RECORD;
tables_import RECORD;
enc_chart_value TEXT;
scale_value NUMERIC;
purpose_value NUMERIC;
empty_tables int;
BEGIN
-- The following code deletes the empty tables of an import schema:
FOR table_record IN SELECT table_name FROM information_schema.tables WHERE table_schema = 'linesenc' AND table_name != 'linesdsid' LOOP
-- Build a dynamic query to check whether the table is empty
EXECUTE format('SELECT COUNT(*) FROM linesenc.%I', table_record.table_name) INTO empty_tables;

-- If the row count is zero, drop the table
    IF empty_tables = 0 THEN
        EXECUTE format('DROP TABLE IF EXISTS linesenc.%I CASCADE',  table_record.table_name);
        RAISE NOTICE 'Table linesenc.% dropped because it is empty.',  table_record.table_name;
    END IF;
END LOOP;

-- Loop over all the tables of the linesenc schema
FOR table_record IN SELECT table_name FROM information_schema.tables WHERE table_schema = 'linesenc' AND table_name != 'linesdsid' LOOP
    -- Check whether the enc_chart, scale and purpose fields are missing from the current table
    IF NOT EXISTS (
        SELECT column_name FROM information_schema.columns
        WHERE table_schema = 'linesenc' AND table_name = table_record.table_name AND column_name IN ('enc_chart', 'scale','purpose')
    ) THEN
        -- Create the enc_chart field (text)
        EXECUTE format('ALTER TABLE linesenc.%I ADD COLUMN enc_chart TEXT', table_record.table_name);
        -- Create the scale field (numeric)
        EXECUTE format('ALTER TABLE linesenc.%I ADD COLUMN scale NUMERIC', table_record.table_name);
        -- Create the purpose field (numeric)
        EXECUTE format('ALTER TABLE linesenc.%I ADD COLUMN purpose NUMERIC', table_record.table_name);
        RAISE NOTICE 'Fields enc_chart, scale and purpose created in table %', table_record.table_name;
            IF NOT EXISTS (
            SELECT column_name FROM information_schema.columns
            WHERE table_schema = 'linesenc' AND table_name = table_record.table_name AND column_name IN ('posacc','quapos')
            ) THEN
            -- Create the POSACC field (numeric)
            EXECUTE format('ALTER TABLE linesenc.%I ADD COLUMN POSACC NUMERIC(10,0)', table_record.table_name);
            -- Create the QUAPOS field (integer)
            EXECUTE format('ALTER TABLE linesenc.%I ADD COLUMN QUAPOS INTEGER', table_record.table_name);
        END IF;
    RAISE NOTICE 'Fields enc_chart, scale and purpose created in table %', table_record.table_name;

    END IF;
        -- Get the enc_chart value from the DSID table (DSID_DSNM)
        SELECT DSID_DSNM INTO enc_chart_value FROM linesenc.linesdsid LIMIT 1;
        -- Get the scale value from the DSID table (DSPM_CSCL)
        SELECT DSPM_CSCL INTO scale_value FROM linesenc.linesdsid LIMIT 1;
        -- Get the purpose value from the DSID table (DSID_INTU)
        SELECT DSID_INTU INTO purpose_value FROM linesenc.linesdsid LIMIT 1;

        -- Update the records with the values found in the DSID table
    EXECUTE format('UPDATE linesenc.%I SET enc_chart = $1 WHERE enc_chart IS NULL', table_record.table_name) USING enc_chart_value;
    EXECUTE format('UPDATE linesenc.%I SET scale = $1 WHERE scale IS NULL', table_record.table_name) USING scale_value;
    EXECUTE format('UPDATE linesenc.%I SET purpose = $1 WHERE purpose IS NULL', table_record.table_name) USING purpose_value;
END LOOP;

-- Delete the record from the DSID table
DELETE FROM linesenc.linesdsid;

RETURN NULL;

END;
$$
LANGUAGE plpgsql;

create_fields_and_update_values_polysenc()

CREATE OR REPLACE FUNCTION create_fields_and_update_values_polysenc()
RETURNS TRIGGER AS
$$
DECLARE
table_record RECORD;
tables_import RECORD;
enc_chart_value TEXT;
scale_value NUMERIC;
purpose_value NUMERIC;
empty_tables int;
BEGIN
-- The following code deletes the empty tables of an import schema:
FOR table_record IN SELECT table_name FROM information_schema.tables WHERE table_schema = 'polysenc' AND table_name != 'polysdsid' LOOP
-- Build a dynamic query to check whether the table is empty
EXECUTE format('SELECT COUNT(*) FROM polysenc.%I', table_record.table_name) INTO empty_tables;

-- If the row count is zero, drop the table
    IF empty_tables = 0 THEN
        EXECUTE format('DROP TABLE IF EXISTS polysenc.%I CASCADE',  table_record.table_name);
        RAISE NOTICE 'Table polysenc.% dropped because it is empty.',  table_record.table_name;
    END IF;
END LOOP;

-- Loop over all the tables of the polysenc schema
FOR table_record IN SELECT table_name FROM information_schema.tables WHERE table_schema = 'polysenc' AND table_name != 'polysdsid' LOOP
    -- Check whether the enc_chart, scale and purpose fields are missing from the current table
    IF NOT EXISTS (
        SELECT column_name FROM information_schema.columns
        WHERE table_schema = 'polysenc' AND table_name = table_record.table_name AND column_name IN ('enc_chart', 'scale','purpose')
    ) THEN
        -- Create the enc_chart field (text)
        EXECUTE format('ALTER TABLE polysenc.%I ADD COLUMN enc_chart TEXT', table_record.table_name);
        -- Create the scale field (numeric)
        EXECUTE format('ALTER TABLE polysenc.%I ADD COLUMN scale NUMERIC', table_record.table_name);
        -- Create the purpose field (numeric)
        EXECUTE format('ALTER TABLE polysenc.%I ADD COLUMN purpose NUMERIC', table_record.table_name);
                    IF NOT EXISTS (
                SELECT column_name FROM information_schema.columns
                WHERE table_schema = 'polysenc' AND table_name = table_record.table_name AND column_name IN ('posacc','quapos')
                ) THEN
                -- Create the POSACC field (numeric)
                EXECUTE format('ALTER TABLE polysenc.%I ADD COLUMN POSACC NUMERIC(10,0)', table_record.table_name);
                -- Create the QUAPOS field (integer)
                EXECUTE format('ALTER TABLE polysenc.%I ADD COLUMN QUAPOS INTEGER', table_record.table_name);
            END IF;
        RAISE NOTICE 'Fields enc_chart, scale and purpose created in table %', table_record.table_name;

    END IF;
        -- Get the enc_chart value from the DSID table (DSID_DSNM)
        SELECT DSID_DSNM INTO enc_chart_value FROM polysenc.polysdsid LIMIT 1;
        -- Get the scale value from the DSID table (DSPM_CSCL)
        SELECT DSPM_CSCL INTO scale_value FROM polysenc.polysdsid LIMIT 1;
        -- Get the purpose value from the DSID table (DSID_INTU)
        SELECT DSID_INTU INTO purpose_value FROM polysenc.polysdsid LIMIT 1;

        -- Update the records with the values found in the DSID table
    EXECUTE format('UPDATE polysenc.%I SET enc_chart = $1 WHERE enc_chart IS NULL', table_record.table_name) USING enc_chart_value;
    EXECUTE format('UPDATE polysenc.%I SET scale = $1 WHERE scale IS NULL', table_record.table_name) USING scale_value;
    EXECUTE format('UPDATE polysenc.%I SET purpose = $1 WHERE purpose IS NULL', table_record.table_name) USING purpose_value;
END LOOP;

-- Delete the record from the DSID table
DELETE FROM polysenc.polysdsid;

RETURN NULL;

END;
$$
LANGUAGE plpgsql;

createtriggersfields()

CREATE OR REPLACE TRIGGER check_update
AFTER INSERT ON linesenc.linesdsid
FOR EACH ROW
EXECUTE PROCEDURE create_fields_and_update_values_linesenc();

CREATE OR REPLACE TRIGGER check_update
AFTER INSERT ON polysenc.polysdsid
FOR EACH ROW
EXECUTE PROCEDURE create_fields_and_update_values_polysenc();

CREATE OR REPLACE TRIGGER check_update
AFTER INSERT ON pointsenc.pointsdsid
FOR EACH ROW
EXECUTE PROCEDURE create_fields_and_update_values_pointsenc();

Workflow for loading S57 files

ogr2ogr commands for creating Postgis tables

With all this in mind, here’s the .bat file with the ogr2ogr command lines to create the tables in the import schemas:

ogr2ogr commands



@echo off
setlocal enabledelayedexpansion

REM Check that a directory was supplied
if "%~1"=="" (
echo Usage: %0 directory000
exit /b 1
)

REM Get the argument
set "directory=%~1"

REM Iterate over all the .000 files in the directory
for /r "%directory%" %%i in (*.000) do (
echo Processing file: %%i

ogr2ogr -skipfailures -append -update -s_srs EPSG:4326 -t_srs EPSG:4326 ^
    -where "OGR_GEOMETRY='POINT' or OGR_GEOMETRY='MULTIPOINT'" ^
    -oo RETURN_PRIMITIVES=ON -oo SPLIT_MULTIPOINT=ON -oo RETURN_LINKAGES=ON -oo LNAM_REFS=ON -oo ADD_SOUNDG_DEPTH=ON ^
    -nlt MULTIPOINT -f PostgreSQL ^
    PG:"dbname=postgis_34_sample host=localhost port=5432 user=postgres password=1touria+ active_schema=pointsenc" %%i

ogr2ogr -skipfailures -append -update -nln pointsDSID -f PostgreSQL  PG:"dbname=postgis_34_sample host=localhost port=5432 user=postgres password=1touria+ active_schema=pointsenc" %%i DSID

ogr2ogr -skipfailures -append -update -s_srs EPSG:4326 -t_srs EPSG:4326 ^
    -where "OGR_GEOMETRY='LINESTRING' or OGR_GEOMETRY='MULTILINESTRING'" ^
    -oo RETURN_PRIMITIVES=ON -oo SPLIT_MULTIPOINT=ON -oo RETURN_LINKAGES=ON -oo LNAM_REFS=ON ^
    -f PostgreSQL PG:"dbname=postgis_34_sample host=localhost port=5432 user=postgres password=1touria+ active_schema=linesenc" %%i

ogr2ogr -skipfailures -append -update -nln linesDSID -f PostgreSQL ^
    PG:"dbname=postgis_34_sample host=localhost port=5432 user=postgres password=1touria+ active_schema=linesenc" %%i DSID

ogr2ogr -skipfailures -append -update -s_srs EPSG:4326 -t_srs EPSG:4326 ^
    -where "OGR_GEOMETRY='POLYGON' or OGR_GEOMETRY='MULTIPOLYGON'" ^
    -oo RETURN_PRIMITIVES=ON -oo SPLIT_MULTIPOINT=ON -oo RETURN_LINKAGES=ON -oo LNAM_REFS=ON ^
    -f PostgreSQL PG:"dbname=postgis_34_sample host=localhost port=5432 user=postgres password=1touria+ active_schema=polysenc" %%i

ogr2ogr -skipfailures -append -update -nln polysDSID -f PostgreSQL ^
    PG:"dbname=postgis_34_sample host=localhost port=5432 user=postgres password=1touria+ active_schema=polysenc" %%i DSID

)

echo Processing complete.
pause

You need to modify the connection information for your PostgreSQL/PostGIS database:

PG:"dbname='postgis_34_sample' host='localhost' port='5434' user='postgres' password='xxxxxx'"

To execute these command lines, simply open the OSGeo4W shell window:

OSGeo4W shell

The Shell window opens

OSGeo4W shell window

Enter the following command line:

.\path\mass_load_s57_postgis.bat repertoire_enc

repertoire_enc is the directory containing the .000 files of the ENC charts. All the .000 files in this directory will be loaded into the import schemas.

The result will be a series of tables created in each import schema. However, some tables will be completely empty. Indeed, if the geometry type is possible for an object class, the table will be created, but if there is no occurrence in the processed S57 file, the table will have no record.
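If you want to spot these empty tables yourself before the cleanup steps described below, an approximate check based on the table statistics can be enough (a sketch; n_live_tup is only an estimate maintained by ANALYZE/autovacuum):

-- Approximate list of import tables that currently contain no rows.
SELECT schemaname, relname
FROM pg_stat_user_tables
WHERE schemaname IN ('pointsenc', 'linesenc', 'polysenc')
  AND n_live_tup = 0
ORDER BY schemaname, relname;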

Loading the database from import schemas

The clone_tables_with_prefix function loads the ENC database with tables from the three import schemas.

  • Firstly, it deletes from each import schema the empty tables created by ogr2ogr in the event that an expected geometry type was not used in the .000 file.
  • Secondly, for each import schema, it checks whether the import schema table already exists in the ENC database. If it doesn’t, it creates the table with the prefix corresponding to the geometry type (pt_, li_, pl_).
  • The function adds records from the import table to the ENC table.
  • For point-type tables, it updates the table’s posacc and quapos attributes with values from the IsolatedNode table.
  • For line and polygon tables, it updates the table’s posacc and quapos attributes using values from the Edge table.
  • Finally, the function empties all the tables in the import schemas, leaving them ready to be reloaded by the ogr2ogr commands.

Before using this function, you need to install the delete_all_records_in_schema function:

Copy the query into a pgAdmin SQL window and run it.

delete_all_records_in_schema 

-- FUNCTION: public.delete_all_records_in_schema(text)

-- DROP FUNCTION IF EXISTS public.delete_all_records_in_schema(text);

CREATE OR REPLACE FUNCTION public.delete_all_records_in_schema(
schema_name text)
RETURNS void
LANGUAGE 'plpgsql'
COST 100
VOLATILE PARALLEL UNSAFE
AS $BODY$
DECLARE
table_record RECORD;
BEGIN
-- Retrieve all the tables of the specified schema
FOR table_record IN
SELECT table_name
FROM information_schema.tables
WHERE table_schema = schema_name AND table_type = 'BASE TABLE'
LOOP
-- Build the DELETE query for each table
EXECUTE format('DELETE FROM %I.%I', schema_name, table_record.table_name);
END LOOP;
END;
$BODY$;

ALTER FUNCTION public.delete_all_records_in_schema(text)
OWNER TO postgres;

The ENC database is updated by calling the command:

SELECT clone_tables_with_prefix();

Don’t forget to replace the schema name (enc2) with the name of your enc schema.

clone_tables_with_prefix

CREATE OR REPLACE FUNCTION clone_tables_with_prefix()
RETURNS void AS
$$
DECLARE
table_nom text;
BEGIN
-- Clone the tables into the ENC schema and update the existing tables

-- Loop over the tables of the pointsenc schema
FOR table_nom IN (SELECT table_name as table_nom FROM information_schema.tables WHERE table_schema = 'pointsenc' AND table_name NOT IN ('pointsdsid','isolatednode','connectednode'))
LOOP
    -- Build the dynamic query to create or update the table in the ENC schema
    EXECUTE format('CREATE TABLE IF NOT EXISTS enc2.pt_%I AS SELECT * FROM pointsenc.%I', table_nom, table_nom);
    EXECUTE format('INSERT INTO enc2.pt_%I SELECT * FROM pointsenc.%I ON CONFLICT DO NOTHING', table_nom, table_nom);
    EXECUTE format('UPDATE enc2.pt_%I SET posacc = isolatednode.posacc,  quapos = isolatednode.quapos FROM pointsenc.IsolatedNode isolatednode WHERE enc2.pt_%I.NAME_RCID[1] = isolatednode.RCID   AND enc2.pt_%I.enc_chart = isolatednode.enc_chart;', table_nom, table_nom,table_nom);

END LOOP;

-- Loop over the tables of the linesenc schema
FOR table_nom IN (SELECT table_name as table_nom FROM information_schema.tables WHERE table_schema = 'linesenc' AND table_name NOT IN ('linesdsid','edge'))
LOOP
    -- Build the dynamic query to create or update the table in the ENC schema
    EXECUTE format('CREATE TABLE IF NOT EXISTS enc2.li_%I AS SELECT * FROM linesenc.%I', table_nom, table_nom);
    EXECUTE format('INSERT INTO enc2.li_%I SELECT * FROM linesenc.%I ON CONFLICT DO NOTHING', table_nom, table_nom);
    EXECUTE format('UPDATE enc2.li_%I SET posacc = edge.posacc,  quapos = edge.quapos FROM linesenc.edge edge WHERE enc2.li_%I.NAME_RCID[1] = edge.RCID   AND enc2.li_%I.enc_chart = edge.enc_chart;', table_nom, table_nom,table_nom);

END LOOP;

-- Loop over the tables of the polysenc schema
FOR table_nom IN (SELECT table_name as table_nom FROM information_schema.tables WHERE table_schema = 'polysenc' AND table_name NOT IN ( 'polysdsid','m_qual','m_srel'))
LOOP
    -- Build the dynamic query to create or update the table in the ENC schema
    EXECUTE format('CREATE TABLE IF NOT EXISTS enc2.pl_%I AS SELECT * FROM polysenc.%I', table_nom, table_nom);
    EXECUTE format('INSERT INTO enc2.pl_%I SELECT * FROM polysenc.%I ON CONFLICT DO NOTHING', table_nom, table_nom);
    EXECUTE format('UPDATE enc2.pl_%I SET posacc = edge.posacc,  quapos = edge.quapos FROM linesenc.edge edge WHERE enc2.pl_%I.NAME_RCID[1] = edge.RCID   AND enc2.pl_%I.enc_chart = edge.enc_chart;', table_nom, table_nom,table_nom);

END LOOP;
PERFORM delete_all_records_in_schema('pointsenc');
PERFORM delete_all_records_in_schema('linesenc');
PERFORM delete_all_records_in_schema('polysenc');

PERFORM update_sbdare();
END;
$$ LANGUAGE plpgsql;

Special treatment of the pt_SBDARE table

After the first execution of the previous script, if the loaded files contained seabed-nature values, you should have a pt_sbdare table in the ENC schema. This is the only table that requires special processing for the default symbology we supply. Indeed, processing the values of the two attributes required to build the seabed-nature label (NATSUR and NATQUA) is very complex, due to the nature of the attributes (StringLists) and the countless possible combinations.

The solution applied here is as follows:

  • A natsurf table lists the possible value combinations of natqua and natsur tuples with the corresponding label.
  • An update_sbdare function processes the two stringlists to compose the value tuples and concatenates the labels from the natsurf table appropriately.
  • The function updates records in the pt_sbdare table whose ‘label’ value is null.

When import schemas are first loaded, the pt_sbdare table does not exist. Once created by an import, you can define a trigger that will automatically execute the function afterwards.
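A minimal sketch of such a trigger, assuming the update_sbdare() function presented below has been created and that your ENC schema is called enc2 (adapt the names to your own setup), could be:

-- Hypothetical sketch: update_sbdare() returns VOID, so a small wrapper
-- function is needed before it can be attached to a statement-level trigger.
CREATE OR REPLACE FUNCTION trg_update_sbdare()
RETURNS TRIGGER AS
$$
BEGIN
    PERFORM update_sbdare();
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE TRIGGER sbdare_label_update
AFTER INSERT ON enc2.pt_sbdare
FOR EACH STATEMENT
EXECUTE PROCEDURE trg_update_sbdare();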

You must therefore:

  • Import the natsurf table into the ENC schema
  • Create the update_sbdare function
  • Execute the function manually for the first time


Importing the natsurf table

Load the following query into a pgAdmin SQL window and run it. It is assumed that you have created an 'enc' schema for your database. If not, edit the query, replacing 'enc.natsurf' with 'your_schema.natsurf'.

natsurf.sql 

SET standard_conforming_strings = ON;
DROP TABLE IF EXISTS enc.natsurf CASCADE;
BEGIN;
CREATE TABLE enc.natsurf();
ALTER TABLE enc.natsurf ADD COLUMN "ogc_fid" SERIAL CONSTRAINT "natsurf_pk" PRIMARY KEY;
ALTER TABLE enc.natsurf ADD COLUMN "fid" NUMERIC(20,0);
ALTER TABLE enc.natsurf ADD COLUMN "natsurt" VARCHAR;
ALTER TABLE enc.natsurf ADD COLUMN "natquat" VARCHAR;
ALTER TABLE enc.natsurf ADD COLUMN "etiq" VARCHAR;
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (1, '1', '0', 'M');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (2, '1', '1', 'fM');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (3, '1', '2', 'mM');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (4, '1', '3', 'cM');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (5, '1', '4', 'bkM');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (6, '1', '5', 'syM');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (7, '1', '6', 'soM');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (8, '1', '7', 'sfM');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (9, '1', '8', 'vM');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (10, '1', '9', 'caM');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (11, '1', '10', 'hM');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (12, '2', '0', 'Cy');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (13, '2', '1', 'fCy');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (14, '2', '2', 'mCy');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (15, '2', '3', 'cCy');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (16, '2', '4', 'bkCy');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (17, '2', '5', 'syCy');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (18, '2', '6', 'soCy');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (19, '2', '7', 'sfCy');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (20, '2', '8', 'vCy');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (21, '2', '9', 'caCy');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (22, '2', '10', 'hCy');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (23, '3', '0', 'Si');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (24, '3', '1', 'fSi');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (25, '3', '2', 'mSi');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (26, '3', '3', 'cSi');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (27, '3', '4', 'bkSi');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (28, '3', '5', 'sySi');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (29, '3', '6', 'soSi');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (30, '3', '7', 'sfSi');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (31, '3', '8', 'vSi');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (32, '3', '9', 'caSi');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (33, '3', '10', 'hSi');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (34, '4', '0', 'S');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (35, '4', '1', 'fS');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (36, '4', '2', 'mS');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (37, '4', '3', 'cS');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (38, '4', '4', 'bkS');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (39, '4', '5', 'syS');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (40, '4', '6', 'soS');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (41, '4', '7', 'sfS');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (42, '4', '8', 'vS');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (43, '4', '9', 'caS');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (44, '4', '10', 'hS');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (45, '5', '0', 'St');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (46, '5', '1', 'fSt');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (47, '5', '2', 'mSt');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (48, '5', '3', 'cSt');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (49, '5', '4', 'bkSt');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (50, '5', '5', 'sySt');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (51, '5', '6', 'soSt');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (52, '5', '7', 'sfSt');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (53, '5', '8', 'vSt');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (54, '5', '9', 'caSt');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (55, '5', '10', 'hSt');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (56, '6', '0', 'G');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (57, '6', '1', 'fG');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (58, '6', '2', 'mG');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (59, '6', '3', 'cG');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (60, '6', '4', 'bkG');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (61, '6', '5', 'syG');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (62, '6', '6', 'soG');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (63, '6', '7', 'sfG');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (64, '6', '8', 'vG');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (65, '6', '9', 'caG');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (66, '6', '10', 'hG');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (67, '7', '0', 'P');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (68, '7', '1', 'fP');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (69, '7', '2', 'mP');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (70, '7', '3', 'cP');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (71, '7', '4', 'bkP');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (72, '7', '5', 'syP');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (73, '7', '6', 'soP');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (74, '7', '7', 'sfP');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (75, '7', '8', 'vP');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (76, '7', '9', 'caP');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (77, '7', '10', 'hP');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (78, '8', '0', 'Cb');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (79, '8', '1', 'fCb');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (80, '8', '2', 'mCb');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (81, '8', '3', 'cCb');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (82, '8', '4', 'bkCb');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (83, '8', '5', 'syCb');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (84, '8', '6', 'soCb');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (85, '8', '7', 'sfCb');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (86, '8', '8', 'vCb');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (87, '8', '9', 'caCb');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (88, '8', '10', 'hCb');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (89, '9', '0', 'R');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (90, '9', '1', 'fR');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (91, '9', '2', 'mR');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (92, '9', '3', 'cR');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (93, '9', '4', 'bkR');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (94, '9', '5', 'syR');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (95, '9', '6', 'soR');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (96, '9', '7', 'sfR');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (97, '9', '8', 'vR');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (98, '9', '9', 'caR');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (99, '9', '10', 'hR');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (100, '11', '0', 'L');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (101, '11', '1', 'fL');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (102, '11', '2', 'mL');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (103, '11', '3', 'cL');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (104, '11', '4', 'bkL');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (105, '11', '5', 'syL');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (106, '11', '6', 'soL');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (107, '11', '7', 'sfL');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (108, '11', '8', 'vL');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (109, '11', '9', 'caL');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (110, '11', '10', 'hL');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (111, '14', '0', 'Co');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (112, '14', '1', 'fCo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (113, '14', '2', 'mCo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (114, '14', '3', 'cCo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (115, '14', '4', 'bkCo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (116, '14', '5', 'syCo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (117, '14', '6', 'soCo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (118, '14', '7', 'sfCo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (119, '14', '8', 'vCo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (120, '14', '9', 'caCo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (121, '14', '10', 'hCo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (122, '17', '0', 'Sh');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (123, '17', '1', 'fSh');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (124, '17', '2', 'mSh');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (125, '17', '3', 'cSh');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (126, '17', '4', 'bkSh');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (127, '17', '5', 'sySh');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (128, '17', '6', 'soSh');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (129, '17', '7', 'sfSh');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (130, '17', '8', 'vSh');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (131, '17', '9', 'caSh');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (132, '17', '10', 'hSh');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (133, '18', '0', 'Bo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (134, '18', '1', 'fBo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (135, '18', '2', 'mBo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (136, '18', '3', 'cBo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (137, '18', '4', 'bkBo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (138, '18', '5', 'syBo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (139, '18', '6', 'soBo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (140, '18', '7', 'sfBo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (141, '18', '8', 'vBo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (142, '18', '9', 'caBo');
INSERT INTO enc.natsurf ("fid", "natsurt", "natquat", "etiq") VALUES (143, '18', '10', 'hBo');
COMMIT;

Creating the update_sbdare function

Load the following query into a pgAdmin SQL window and execute it. It is assumed that you have created an 'enc' schema for your database. If not, edit the query, replacing the schema name used below (enc2) with the name of your own schema.

update_sbdare.sql



CREATE OR REPLACE FUNCTION update_sbdare()
RETURNS VOID AS
$$
DECLARE
rec_row RECORD;
natsurt VARCHAR[4];
natquat VARCHAR[4];
S VARCHAR[3];
flag INT := 0;
etiq VARCHAR(25);
etiquet VARCHAR(25);
BEGIN
-- Check whether the 'label' field already exists
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_schema = 'enc2' AND table_name = 'pt_sbdare' AND column_name = 'label'
) THEN

-- Loop over the rows of the pt_sbdare table
FOR rec_row IN SELECT * FROM enc2.pt_sbdare WHERE label IS NULL LOOP
    -- Reset the variables at each iteration of the loop
    natsurt := ARRAY['','','',''];
    S := ARRAY[',',',',','];
    flag :=0;

    -- Extract the parts of natsur
    FOR i IN 1..4 LOOP
        -- early exit
        IF i=4 AND flag=1 THEN
            EXIT;
        END IF;

        -- Check whether rec_row.natsur[i] is NULL
        IF rec_row.natsur[i] IS NULL THEN
            IF flag=0 THEN
                natsurt[i] := '0';
            ELSE
                natsurt[i+1] := '0';
            END IF;
        -- Check whether rec_row.natsur[i] contains a '/'
        ELSIF strpos(rec_row.natsur[i], '/') = 0 THEN
            IF flag=0 THEN
                natsurt[i] := rec_row.natsur[i];
                IF rec_row.natqua[i] IS NOT NULL THEN
                    natquat[i] := rec_row.natqua[i];
                ELSE
                    natquat[i] := '0';
                END IF;
            ELSE
                natsurt[i+1] := rec_row.natsur[i];
                IF rec_row.natqua[i] IS NOT NULL THEN
                    natquat[i+1] := rec_row.natqua[i];
                ELSE
                    natquat[i] := '0';
                END IF;
            END IF;

        ELSE
            -- Extract the parts before and after the '/'
            IF i < 5 THEN
                natsurt[i] := split_part(rec_row.natsur[i], '/', 1);
                natsurt[i+1] := split_part(rec_row.natsur[i], '/', 2);
                IF rec_row.natqua[i] IS NOT NULL THEN
                    natquat[i] := rec_row.natqua[i];
                ELSE
                    natquat[i] := '0';
                END IF;
                S[i] := '/';
                flag :=1; -- we found a value containing a '/'
            ELSE
                natsurt[i] := split_part(rec_row.natsur[i], '/', 1);
            END IF;
        END IF;
    END LOOP;

    etiquet :='';
    etiq :='';

    FOR i IN 1..4 LOOP
        IF natsurt[i] <> '0' THEN
            -- Run the SQL query to retrieve the label for natsurt[i] and natquat[i]
            EXECUTE 'SELECT etiq FROM enc2.natsurf WHERE NATSURT = $1 AND NATQUAT = $2' INTO etiq USING natsurt[i], natquat[i];

            IF i =1 THEN
                etiquet := etiq ;
            ELSE
                etiquet := etiquet || S[i-1] || etiq;
            END IF;
        END IF;
    END LOOP;

    -- Update the row with the extracted values
    UPDATE enc2.pt_sbdare
    SET
        label = etiquet
    WHERE pt_sbdare.ogc_fid = rec_row.ogc_fid;
END LOOP;

END IF;
END;
$$
LANGUAGE plpgsql;

Manual execution of the function for the first update

In a pgAdmin SQL window, enter the query

ALTER TABLE enc.pt_sbdare ADD COLUMN label VARCHAR(25);

SELECT update_sbdare()

The result will be the ‘label’ attribute in the pt_sbdare table. Subsequently, when you run clone_tables_with_prefix(), the update will be performed automatically.

