diff --git a/.gitignore b/.gitignore
index a304de2..6d57d71 100644
--- a/.gitignore
+++ b/.gitignore
@@ -38,7 +38,10 @@
/data_processing_rcode/data/
/data_processing_rcode/output/
/data_processing_rcode/notforgit/
-
+/ArcGIS-Analysis-Python/
+.vscode
+.github
+.pre-commit-config.yaml
/code/.quarto/
*.zip
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
new file mode 100644
index 0000000..c2bc8c9
--- /dev/null
+++ b/.pre-commit-config.yaml
@@ -0,0 +1,23 @@
+# .pre-commit-config.yaml
+repos:
+ - repo: https://github.com/pre-commit/pre-commit-hooks
+ rev: v4.5.0 # Use the latest version
+ hooks:
+ - id: trailing-whitespace
+# - id: end-of-file-fixer
+ - id: check-yaml
+ - id: check-added-large-files
+ - id: check-json
+ - id: detect-private-key
+# - repo: https://github.com/psf/black
+# rev: 24.3.0 # Use the latest version
+# hooks:
+# - id: black
+# - repo: https://github.com/PyCQA/flake8
+# rev: 7.0.0 # Use the latest version
+# hooks:
+# - id: flake8
+# - repo: https://github.com/PyCQA/isort
+# rev: 5.13.2 # Use the latest version
+# hooks:
+# - id: isort
diff --git a/.vscode/settings.json b/.vscode/settings.json
new file mode 100644
index 0000000..bcb0432
--- /dev/null
+++ b/.vscode/settings.json
@@ -0,0 +1,3 @@
+{
+ "python.defaultInterpreterPath": "C:\\Uses\\john.f.kennedy\\AppData\\Local\\ESRI\\conda\\envs\\arcgispro-py3-clone\\python.exe"
+}
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/.gitignore b/ArcGIS-Analysis-Python/.gitignore
index f80c368..7fc1c4c 100644
--- a/ArcGIS-Analysis-Python/.gitignore
+++ b/ArcGIS-Analysis-Python/.gitignore
@@ -1,187 +1,42 @@
-#################
-## Visual Studio
-#################
+# Global ignore: everything
+*
-## Ignore Visual Studio temporary files, build results, and
-## files generated by popular Visual Studio add-ons.
+# allow directories so we can whitelist specific content
+!*/
-# User-specific files
-*.suo
-*.user
-*.sln.docstates
-
-# Build results
-
-[Dd]ebug/
-[Rr]elease/
-x64/
-build/
-[Bb]in/
-[Oo]bj/
-
-# MSTest test Results
-[Tt]est[Rr]esult*/
-[Bb]uild[Ll]og.*
-
-*_i.c
-*_p.c
-*.ilk
-*.meta
-*.obj
-*.pch
-*.pdb
-Initial Data
-Data
-*.pgd
-*.rsp
-*.sbr
-*.tlb
-*.tli
-*.tlh
-*.tmp
-*.tmp_proj
-*.log
-NCEI Archive
-*.pidb
-*.log
-*.scc
-
-# Visual C++ cache files
-ipch/
-*.aps
-*.ncb
-*.opensdf
-*.sdf
-*.cachefile
-
-# Visual Studio profiler
-*.psess
-*.vsp
-*.vspx
-
-#############
-## Windows detritus
-#############
-
-# Windows image file caches
-src/dismap_tools_dev
-ehthumbs.db
-
-# Folder config file
-!src/dismap_tools/*.py
-!src/dismap_tools_dev/*.py
-!src/dismap_tools/dev_*.py
-!src/dismap_tools/__init__.py
-!src/dismap_tools/README.md
-
-# Mac crap
-.DS_Store
-
-#############
-## Python
-#############
-
-*.py[co]
-*.pyc
-# Packages
-*.egg
-*.egg-info
-dist/
-build/
-eggs/
-parts/
-var/
-sdist/
-develop-eggs/
-.installed.cfg
-
-# Installer logs
-pip-log.txt
-
-# Unit test / coverage reports
-.coverage
-.tox
-
-#Translations
-*.mo
-
-#Mr Developer
-.mr.developer.cfg
-
-# Tools, notes, outputs
-___*.*
-___*
-*.log
+# Remove selected directories
+.ipynb_checkpoints
+**/.ipynb_checkpoints
+.backups
+__pycache__
+**/__pycache__
-[Ss]ampleData/
-source/[Ss]ampleData/
-source/[Tt]est[Cc]onfigs/
-#src/service_monitor
-error.txt
-#############
-## ArcGIS
-#############
-*.gdb
-*.ipynb
-ImportLog
-*.aprx
-*.atbx
-*.mxd
-*.json
+# include python files anywhere
+!*.py
+!*.pyt
+!**/*.py
+!**/*.pyt
-#############
-## ArcGIS-Analysis-Python ignores
-#############
-# Ignore Folders
-.backups
-.ipynb_checkpoints
-__pycache__
-April 1 2023
-August 1 2025
-Bathymetry
-Initial Data
-Dataset Shapefiles
-December 1 2024
-February 1 2026
-GpMessages
-ImportLog
-Index
-Initial Data
-July 1 2024
-Layout
-May 16 2022
-NCEI Archive
-Notebooks
-RasterFunctionsHistory
-RasterFunctionTemplates
-Scratch
-# Ignore Files
-*.ags
-*.docx
-.pyHistory
-*.py
+# still ignore py backup variants
*.~py
-*.pyt
-*.xml
-*.xsl
-*.zip
+**/*.~py
+# ignore Python init files globally
__init__.py
-*.txt
-conftest.py
-main.py
-setip.py
-utils.py
-Metadata.md
-src/dismap_tools/esri
-NCEI Archive
-_Dataset Shapefiles
-#src/dismap_tools_dev
-#############
-# Do not ignore these files
+**/__init__.py
+
+# include Scripts folder and its contents
+!/Scripts/dismap_tools
+/Scripts/dismap_tools_dev/*
+
+# include README files in root, Scripts, and dismap_tools
!README.md
-!src/dismap_tools/*.py
-!src/dismap_tools_dev/*.py
-src/dismap_tools/dev_*.py
-src/dismap_tools/__init__.py
-!src/dismap_tools/README.md
+!Scripts/README.md
+!Scripts/dismap_tools/README.md
+
+# Notebook folder and its notebooks
+/Notebooks/
+# explicitly exclude esri folder under dismap_tools
+/Scripts/dismap_tools/esri/
+/src/
+/Initial Data/*.csv
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/README.md b/ArcGIS-Analysis-Python/Scripts/README.md
new file mode 100644
index 0000000..f4ad17f
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/README.md
@@ -0,0 +1,23 @@
+# DisMAP ArcGIS Pro Analysis using Python
+> This code is always in development. Find the code used for various reports in the code [releases](https://github.com/nmfs-fish-tools/DisMAP/releases).
+
+### Explanation of this Folder
+* dismap_tools folder - This folder contains the current version of the Python scripts for generating the interpolated biomass and calculating the distribution indicators (latitude, depth, range limits, etc.).
+
+#### Suggestions and Comments
+
+If you see that the data, product, or metadata can be improved, you are invited to create a [pull request](https://github.com/nmfs-fish-tools/DisMAP/pulls) or [submit an issue to the code’s repository](https://github.com/nmfs-fish-tools/DisMAP/issues).
+
+#### NOAA-NMFS GitHub Enterprise Disclaimer
+
+This repository is a scientific product and is not official communication of the National Oceanic and Atmospheric Administration, or the United States Department of Commerce. All NOAA GitHub project code is provided on an ‘as is’ basis and the user assumes responsibility for its use. Any claims against the Department of Commerce or Department of Commerce bureaus stemming from the use of this GitHub project will be governed by all applicable Federal law. Any reference to specific commercial products, processes, or services by service mark, trademark, manufacturer, or otherwise, does not constitute or imply their endorsement, recommendation or favoring by the Department of Commerce.
+The Department of Commerce seal and logo, or the seal and logo of a DOC bureau, shall not be used in any manner to imply endorsement of any commercial product or activity by DOC or the United States Government.
+
+#### NOAA License
+
+Software code created by U.S. Government employees is not subject to copyright in the United States (17 U.S.C. §105). The United States/Department of Commerce reserve all rights to seek and obtain copyright protection in countries other than the United States for Software authored in its entirety by the Department of Commerce. To this end, the Department of Commerce hereby grants to Recipient a royalty-free, nonexclusive license to use, copy, and create derivative works of the Software outside of the United States.
+
+
+
+[U.S. Department of Commerce](https://www.commerce.gov/) \| [National Oceanic and Atmospheric Administration](https://www.noaa.gov) \| [NOAA Fisheries](https://www.fisheries.noaa.gov/)
+
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/README.md b/ArcGIS-Analysis-Python/Scripts/dismap_tools/README.md
new file mode 100644
index 0000000..32ce0ce
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/README.md
@@ -0,0 +1,1933 @@
+# DisMAP ArcGIS Pro Analysis using Python
+> This code is always in development. Find the code used for various reports in the code [releases](https://github.com/nmfs-fish-tools/DisMAP/releases).
+
+### Table of contents ###
+
+> - [*Purpose*](#purpose)
+> - [*DisMAP ArcGIS Python Processing Project Setup*](#dismap-arcgis-python-processing-project-setup)
+> - [Zip and Unzip CSV Data](#zip-and-unzip-csv-data)
+> - [Zip and Unzip Shapefile Data](#zip-and-unzip-shapefile-data)
+> - [DisMAP Tools](#dismap-tools)
+> - [DisMAP Project Setup](#dismap-project-setup)
+> - [Create Base Bathymetry](#create-base-bathymetry)
+> - [Create Data Dictionary JSON Files](#create-data-dictionary-json-files)
+> - [Create Metadata JSON Files](#create-metadata-json-files)
+> - [Import Datasets Species Filter CSV Data](#import-datasets-species-filter-csv-data)
+> - [*DisMAP ArcGIS Python Processing*](#dismap-arcgis-python-processing)
+> - [Create Regions from Shapefiles Director and Worker](#create-regions-from-shapefiles-director-and-worker)
+> - [Create Region Fishnets Director and Worker](#create-region-fishnets-director-and-worker)
+> - [Create Region Bathymetry Director and Worker](#create-region-bathymetry-director-and-worker)
+> - [Create Region Sample Locations Director and Worker](#create-region-sample-locations-director-and-worker)
+> - [Create Region Species Year Image Name Table Director and Worker](#create-region-species-year-image-name-table-director-and-worker)
+> - [Create Region Rasters Director and Worker](#create-region-rasters-director-and-worker)
+> - [Create Region Species Richness Director and Worker](#create-region-species-richness-director-and-worker)
+> - [Create Region Mosaics Director and Worker](#create-region-mosaics-director-and-worker)
+>   - [Create Region Indicators Table Director and Worker](#create-region-indicators-table-director-and-worker)
+> - [Publish to Portal Director](#publish-to-portal-director)
+> - [*Suggestions and Comments*](#suggestions-and-comments)
+> - [*NOAA README*](#noaa-readme)
+> - [*NOAA-NMFS GitHub Enterprise Disclaimer*](#noaa-nmfs-github-enterprise-disclaimer)
+> - [*NOAA License*](#noaa-license)
+
+### *Purpose*
+These Python scripts were developed for the DisMAP ArcGIS Python Processing phase of the project. In general, the scripts listed below are run in the order presented, from a Python IDE such as [*PyScripter*](https://sourceforge.net/projects/pyscripter/).
+
+### *DisMAP ArcGIS Python Processing Project Setup*
+- #### Zip and Unzip CSV Data
+  - The [zip_and_unzip_csv_data.py](zip_and_unzip_csv_data.py) file archives/extracts sample location and biomass measurements stored in CSV data files
+  - The script takes the target location where the files will be extracted and the source ZIP file path
+ - Extracts CSV survey data from a ZIP archive, renames files, and attaches metadata:
+ 1. **Extract ZIP** — Unzips files from source ZIP (e.g., `CSV Data 2025 08 01.zip`) into the project's `CSV_Data` folder
+ 2. **Rename CSV files** — Copies extracted `*_survey.csv` files and renames them to `*_IDW.csv` (e.g., `AI_IDW_survey.csv` → `AI_IDW.csv`)
+ 3. **Clean up** — Removes the temporary `python/` extraction subdirectory
+ 4. **Attach metadata** — For each `*_IDW.csv` file:
+ - Synchronizes ArcGIS metadata
+ - Imports contact/organizational metadata from `DisMAP Contacts 2025 08 01.xml`
+ - Parses XML with lxml, sorts elements by a predefined `root_dict` order
+ - Writes updated metadata back to file
+ 5. **Return path** — Returns the `CSV_Data` output directory path
+
+ - Uses ArcGIS tool parameters and defaults to `~\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025`.
+
+- #### Zip and Unzip Shapefile Data
+ - The [zip_and_unzip_shapefile_data.py](zip_and_unzip_shapefile_data.py) file contains functions to archive/extract the shapefiles that represent each region
+  - The script takes the target location where the files will be extracted and the source ZIP file path
+ - No metadata processing or file renaming; straightforward extraction of shapefiles (e.g., `AI_IDW_Region.shp`, `EBS_IDW_Region.shp`, etc.) into their target directory
+ - Extracts shapefile data from a ZIP archive:
+ 1. **Extract ZIP** — Unzips files from source ZIP into the project's `Dataset_Shapefiles` folder (changes working directory with `os.chdir()`)
+ 2. **Return path** — Returns the `Dataset_Shapefiles` output directory path
+ - Uses ArcGIS tool parameters and defaults to `~\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025`.
+ - Follows ArcGIS tool conventions (parameter access via `arcpy.GetParameterAsText()`, messaging via `arcpy.AddMessage()`, error handling with `arcpy.ExecuteError`)
+
+- #### DisMAP Tools
+  - The [dismap_tools.py](dismap_tools.py) file is a utility library of XML/metadata parsing, field transformation, and spatial analysis helpers (built on lxml and ArcPy) that is imported by many of the project scripts
+
+ **XML & Metadata Functions:**
+ - **`parse_xml_file_format_and_save(csv_data_folder, xml_file, sort)`** — Parses and reformats XML metadata using lxml; optionally sorts elements by predefined priority order from `root_dict.json`; writes back to file
+ - **`print_xml_file(xml_file, sort)`** — Displays formatted XML metadata to ArcGIS messages
+ - **`compare_metadata_xml(file1, file2)`** — Compares two metadata XML files
+ - **`export_metadata(csv_data_folder, in_table)`** — Exports ArcGIS metadata from a dataset
+ - **`import_metadata(csv_data_folder, dataset)`** — Imports standardized metadata from JSON definitions into GDB dataset
+
+ **Field & Table Management:**
+ - **`add_fields(csv_data_folder, in_table)`** — Adds fields to a GDB table based on schema from `table_definitions.json` + `field_definitions.json`
+ - **`alter_fields(csv_data_folder, in_table)`** — Updates field aliases and properties on existing fields
+ - **`field_definitions(csv_data_folder, field)`** — Loads field schema dictionary from `field_definitions.json`
+ - **`dTypesCSV(csv_data_folder, table)`** — Returns pandas data types for CSV columns based on table schema
+ - **`dTypesGDB(csv_data_folder, table)`** — Returns NumPy data types for GDB fields based on table schema
+
+ **Data Inspection & Utilities:**
+ - **`check_datasets(datasets)`** — Logs detailed metadata (extent, cell size, date created, spatial reference, sample rows) for feature classes, rasters, and tables
+ - **`check_transformation(ds, cs)`** — Validates spatial reference transformation compatibility
+ - **`get_transformation(gsr_wkt, psr_wkt)`** — Gets geographic transformation between two coordinate systems
+ - **`clear_folder(folder)`** — Removes all files from a folder
+ - **`backup_gdb(project_gdb)`** — Creates a backup copy of geodatabase and compacts both
+
+ **Data Lookup & Conversion:**
+ - **`date_code(version)`** — Generates/extracts DateCode from project version name (e.g., `"August 1 2025"` → `"20250801"`)
+ - **`convertSeconds(seconds)`** — Converts seconds to HH:MM:SS format
+ - **`get_encoding_index_col(csv_file)`** — Detects CSV file encoding using `chardet`
+ - **`dataset_title_dict(project_gdb)`** — Builds lookup dictionary mapping region/dataset codes to display titles
+ - **`metadata_dictionary_json(csv_data_folder, dataset_name)`** — Loads metadata templates from JSON
+ - **`table_definitions(csv_data_folder, field)`** — Loads table-to-fields schema mapping from JSON
+
+ **Pattern**: Serves as central helper library for other DisMAP tools; heavily uses JSON schema files (`field_definitions.json`, `table_definitions.json`) as single source of truth for GDB structure; consistent exception handling with ArcPy logging; all functions clean up local variables at completion.
+
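Two of the pure-Python helpers are small enough to sketch from their descriptions (these are reconstructions of the documented behavior, not the library's actual code):

```python
from datetime import datetime

def date_code(version: str) -> str:
    """Turn a project version name such as 'August 1 2025' into the
    eight-character DateCode '20250801' (documented date_code behavior)."""
    return datetime.strptime(version, "%B %d %Y").strftime("%Y%m%d")

def convertSeconds(seconds: int) -> str:
    """Render an elapsed time in seconds as HH:MM:SS."""
    hours, remainder = divmod(int(seconds), 3600)
    minutes, secs = divmod(remainder, 60)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}"
```

For example, `date_code("May 16 2022")` yields `"20220516"`.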
+- #### DisMAP Project Setup
+ - The [dismap_project_setup.py](dismap_project_setup.py) ArcGIS/ArcPy/Python script creates ArcGIS Pro project folder structure: GDB, scratch workspace, subfolders, and configures toolboxes/databases in `.aprx`
+  - The input for the script is the Project Folder path (e.g., "Documents/ArcGIS/Projects/DisMAP/ArcGIS-Analysis-Python/December 1 2025")
+
+ **Main function: `script_tool(new_project_folder, project_folders)`**
+
+ 1. **Get home folder**
+ - Accesses current ArcGIS Pro project's home folder via `arcpy.mp.ArcGISProject("CURRENT")`
+
+ 2. **Create project folder structure**:
+ - Creates main project folder (e.g., `"September 1 2025"`)
+ - Creates project GDB: `{new_project_folder}\{new_project_folder}.gdb`
+ - Creates Scratch folder: `{new_project_folder}\Scratch`
+ - Creates scratch GDB: `Scratch\scratch.gdb`
+ - Creates subfolders from comma-separated list (e.g., `CRFs;CSV_Data;Dataset_Shapefiles;Images;Layers;Metadata_Export;Publish`)
+
+ 3. **Configure `.aprx` file**:
+ - Saves a copy of the current project as `{new_project_folder}.aprx`
+ - Opens the new APRX file
+ - Removes all existing maps
+ - Updates project databases: sets the new project GDB as default database
+ - Updates project toolboxes: registers `DisMAP.atbx` as default toolbox
+ - Saves the configured APRX
+
+ 4. **Parameters**:
+ - `new_project_folder` — Name of new project folder (defaults to `"September 1 2025"`)
+ - `project_folders` — Semicolon-separated subfolder names (defaults to `"CRFs;CSV_Data;Dataset_Shapefiles;Images;Layers;Metadata_Export;Publish"`)
+
+ **Pattern**: Designed to be called from ArcGIS Pro as a script tool; uses standard toolbox parameter conventions (`arcpy.GetParameterAsText()`, `arcpy.SetParameterAsText()`) and follows DisMAP folder/naming conventions (date-based project folders, standard GDB/subfolder structure).
+
+- #### Create Base Bathymetry
+ - The [create_base_bathymetry.py](create_base_bathymetry.py) ArcGIS/ArcPy/Python script processes bathymetry data for use in later scripts
+  - The input for the script is the Project Folder path (e.g., "Documents/ArcGIS/Projects/DisMAP/ArcGIS-Analysis-Python/December 1 2025")
+ - All functions follow ArcGIS Pro logging/error handling patterns, manage ArcPy environments (workspace, cellsize, resampling), and clean up local variables at the end.
+ - The script contains four main functions:
+
+ 1. **`raster_properties_report()`** — Utility that logs raster metadata (spatial reference, extent, cell size, statistics, pixel type) using `arcpy.AddMessage()`.
+
+ 2. **`create_alasaka_bathymetry(project_folder)`** — Processes Alaska bathymetry:
+ - Copies ASCII GRID files (AI, EBS, GOA) into a geodatabase
+ - Converts positive depth values to negative
+ - Appends/clips rasters to ensure full coverage
+ - Reprojects each region's bathymetry to its regional spatial reference (AI_IDW, EBS_IDW, etc.)
+ - Copies final rasters to the project bathymetry GDB and compacts the database
+
+ 3. **`create_hawaii_bathymetry(project_folder)`** — Processes Hawaii bathymetry:
+ - Converts a polygon shapefile grid (BFISH_PSU.shp) to a raster using depth field
+ - Negates values and saves to GDB
+ - Copies final rasters to the project bathymetry GDB and compacts the database
+
+ 4. **`gebco_bathymetry(project_folder)`** — Processes GEBCO data for all other regions:
+    - Converts GEBCO ASCII rasters to GDB rasters for each region
+ - Copies final rasters to the project bathymetry GDB and compacts the database
+
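The "convert positive depth values to negative" step is done with ArcPy raster tools in the script; this NumPy sketch only illustrates the sign convention (positive depths become negative elevations, already-negative cells are untouched):

```python
import numpy as np

def depths_to_negative(elevation: np.ndarray) -> np.ndarray:
    """Flip positive depth values to negative elevations."""
    return np.where(elevation > 0, -elevation, elevation)
```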
+- #### Create Data Dictionary JSON Files
+  - The [create_data_dictionary_json_files.py](create_data_dictionary_json_files.py) ArcGIS/ArcPy/Python script generates JSON metadata definitions for all GDB tables and fields in the DisMAP project:
+
+ ***Main function: `script_tool(project_folder)`***
+
+  1. **Field Definitions Dictionary** (large hardcoded dict covering 200+ fields):
+ - Defines every GDB field used in DisMAP with metadata:
+ - `field_aliasName`, `field_name`, `field_type`, `field_length`, `field_precision`, `field_scale`
+ - `field_editable`, `field_isNullable`, `field_required`
+ - `field_attrdef`, `field_attrdefs`, `field_attrdomv` (ArcGIS ISO metadata attributes)
+ - `field_domain` (for referential integrity links)
+ - Examples: `CSVFile`, `Category`, `CellSize`, `CenterOfGravityDepth`, `CommonName`, `DateCode`, `Depth`, `Dimensions`, etc.
+
+ 2. **Table Field Mappings** (hardcoded list patterns):
+ - Defines which fields belong to each table type:
+ - `_Datasets` — dataset catalog fields (DatasetCode, CSVFile, TableName, Region, Season, DateCode, etc.)
+ - `_Indicators` — distribution indicator fields (CenterOfGravity*, MinimumLatitude, MaximumDepth, etc.)
+ - `_IDW` — Inverse Distance Weighting survey data (Species, WTCPUE, MapValue, Coordinates, Depth)
+ - `_Sample_Locations` — sample point data (SampleID, Year, Species, WTCPUE, Stratum, Coordinates)
+ - `_Species_Filter` — species metadata (Species, CommonName, TaxonomicGroup, FilterRegion, ManagementBody)
+ - `_DisMAP_Survey_Info` — survey metadata (SurveyName, Region, Season, GearType, Years, DataSource)
+ - `_SpeciesPersistenceIndicatorTrend`, `_SpeciesPersistenceIndicatorPercentileBin` — persistence indicators
+ - Region-specific: `_Boundary`, `_Extent_Points`, `_Fishnet`, `_Lat_Long`, `_Mosaic`, `_Raster_Mask`
+
+ 3. **Dynamic Table-to-Fields Mapping**:
+ - Iterates over hardcoded table names (15 IDW regions + 6 metadata tables)
+ - For each `*_IDW` region: assigns fields + auto-generates derived table variants (Sample_Locations, Indicators, Bathymetry, Boundary, Extent_Points, Fishnet, LayerSpeciesYearImageName, Lat_Long, Latitude, Longitude, Mosaic, Raster_Mask, Region)
+ - Special handling for metadata tables (Datasets, DisMAP_Regions, Indicators, Species_Filter, DisMAP_Survey_Info, SpeciesPersistenceIndicatorTrend, SpeciesPersistenceIndicatorPercentileBin)
+
+ 4. **JSON Output**:
+ - Exports `field_definitions.json` → all field metadata (keyed by field name)
+ - Exports `table_definitions.json` → all table field lists (keyed by table name, values are lists of field names)
+ - Both written to `{project_folder}\CSV_Data\`
+
+ 5. **Cleanup & Validation**:
+ - Logs all table/field mappings
+ - Cross-checks that all mapped fields exist in `field_definitions`
+ - Compacts GDB
+
+ **Key Pattern**: Serves as schema/metadata registry for the entire DisMAP project; JSON files are consumed by downstream tools (e.g., import_datasets_species_filter_csv_data.py loads field schemas) to enforce consistent data types and field properties across all CSV imports and GDB operations.
+
+- #### Create Metadata JSON Files
+  - The [create_metadata_json_files.py](create_metadata_json_files.py) ArcGIS/ArcPy/Python script generates standardized XML metadata ordering dictionaries and contact information JSON files for ArcGIS Pro metadata templates:
+
+ **Main function: `script_tool(project_gdb)`**
+ Creates and exports multiple JSON lookup/mapping files to `{project_folder}\CSV_Data\`:
+
+ 1. **`root_dict.json`**
+ - XML element priority ordering for top-level ISO 19139 metadata sections:
+ - Maps 21 root elements to sort order (e.g., `Esri: 0`, `dataIdInfo: 1`, `dqInfo: 2`, `distInfo: 3`)
+ - Used to normalize/sort XML metadata trees consistently
+
+ 2. **`esri_dict.json`** — ArcGIS Esri-specific metadata element ordering:
+ - Nested structure for `DataProperties` (lineage, itemProps, nativeExtBox, itemLocation, coordRef)
+ - Maps ~15 Esri metadata subelements to sort priorities
+
+ 3. **`dataIdInfo_dict.json`** — ISO 19139 Data Identification nested element hierarchy:
+ - Complex nested mapping for keyword sections (discKeys, themeKeys, placeKeys, tempKeys, otherKeys)
+ - Maps spatial representation, data extent, temporal elements, and citation structures
+
+ 4. **`idCitation_dict.json`** — Resource citation element ordering:
+ - Maps citation subelements (resTitle, date, presForm, citRespParty) to priorities
+
+ 5. **`contact_element_order_dict.json`** — Contact/responsible party element ordering:
+ - Defines sort order for ~35 contact metadata fields (name, organization, email, phone, role, citation info)
+
+ 6. **`distInfo_dict.json`** — Distribution information element ordering:
+ - Maps distribution channel metadata (distorFormat, distorCont, distTranOps)
+
+ 7. **`RoleCd_dict.json`** — Role code lookup table:
+ - Maps codes (`"001"` → `"Resource Provider"`, `"007"` → `"Point of Contact"`, etc.) for 15 ISO roles
+
+ 8. **`tpCat_dict.json`** — ISO Topic Category XML snippets:
+     - Maps topic codes to pre-formatted XML strings (e.g., "002" → ......)
+
+ 9. **`contact_dict.json`** — DisMAP team hardcoded contact information:
+ - Defines role-based contacts: Custodian, Point of Contact, Distributor, Author, Principal Investigator, Processors
+ - Names: Timothy J Haverland, Melissa Ann Karp, John F Kennedy (with @noaa.gov emails)
+ - Used to populate metadata for all DisMAP datasets
+
+ **Pattern**: All dicts are created, serialized to JSON, then immediately re-read to verify round-trip integrity. These JSON files are consumed by other tools (e.g., import_datasets_species_filter_csv_data.py, dismap_metadata_processing.py) to ensure consistent metadata formatting and sort order across all GDB objects and portal publications.
+
+- #### Import Datasets Species Filter CSV Data
+  - The [import_datasets_species_filter_csv_data.py](import_datasets_species_filter_csv_data.py) ArcGIS/ArcPy/Python script imports the survey metadata CSV files into tables in the project geodatabase
+  - The inputs for the script are the Project Folder path (e.g., "Documents/ArcGIS/Projects/DisMAP/ArcGIS-Analysis-Python/December 1 2025") and CSV files for:
+ 1. Datasets
+ 2. Species_Filter
+ 3. DisMAP_Survey_Info
+ 4. SpeciesPersistenceIndicatorTrend
+ 5. SpeciesPersistenceIndicatorPercentileBin
+ - Multi-function script that imports survey metadata CSVs into an ArcGIS Pro GDB and manages related utilities:
+
+ 1. **`get_encoding_index_col(csv_file)`** — Detects CSV encoding using `chardet` and identifies index column:
+ - Reads raw file bytes and auto-detects character encoding
+ - Uses pandas to load CSV and check if first column is `"Unnamed: 0"` (pandas-generated index)
+ - Returns encoding and index column position
+
+ 2. **`worker(project_gdb, csv_file)`** — Main worker function; converts CSV to GDB table:
+ - Validates GDB and CSV exist; sets ArcGIS logging/environment
+ - Uses `dismap_tools` helper functions to load CSV dtypes and GDB field schema
+ - Loads CSV with pandas, handling encoding/index column detection
+ - Replaces NaN with empty strings; strips whitespace
+ - Converts pandas DataFrame to NumPy structured array matching GDB field types
+ - Writes array to temporary in-memory table, then copies to GDB using `arcpy.da.NumPyArrayToTable()`
+ - Cleans up string fields (replaces None with empty strings)
+ - Calls `dismap_tools.alter_fields()` to adjust field properties
+ - Calls `dismap_tools.import_metadata()` to attach metadata from JSON definitions
+ - Compacts the GDB
+
+ 3. **`update_datecode(csv_file, project_name)`** — Updates DateCode values in CSV:
+ - Loads CSV with pandas
+ - Extracts old DateCode from first row
+ - Replaces all DateCode occurrences with new code based on project_name (using `dismap_tools.date_code()`)
+ - Writes updated CSV back to disk
+
+ 4. **`script_tool(project_folder)`** — Orchestrates full import workflow:
+ - Copies dated CSV files from `home_folder\Datasets\` to project's `CSV_Data\` folder (e.g., `Datasets_20250801.csv` → `Datasets.csv`)
+ - Loads metadata template mapping from `root_dict.json`
+ - Imports contact metadata from `DisMAP Contacts 2025 08 01.xml` into each CSV file
+ - Parses XML with lxml, sorts elements by priority order from `root_dict`
+ - Calls `update_datecode()` to update date codes in Datasets.csv
+ - Calls `worker()` for each of five tables: Datasets, Species_Filter, DisMAP_Survey_Info, SpeciesPersistenceIndicatorTrend, SpeciesPersistenceIndicatorPercentileBin
+ - Logs elapsed time
+
+ - Key pattern: Heavy use of pandas for CSV parsing + NumPy array conversion to ensure type safety between CSV and GDB; metadata synchronization with lxml XML manipulation
+
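The core DataFrame-to-structured-array conversion inside `worker()` might look roughly like this (a sketch assuming string and integer columns; in the real script the `dtypes` list is built from the JSON schema files via `dismap_tools.dTypesGDB()`, and the array is written out with `arcpy.da.NumPyArrayToTable()`):

```python
import numpy as np
import pandas as pd

def frame_to_structured_array(df: pd.DataFrame, dtypes) -> np.ndarray:
    """Convert a CSV-derived DataFrame to a NumPy structured array with
    explicit GDB-compatible field types, replacing NaN with ''."""
    df = df.fillna("")
    # Strip stray whitespace from string columns
    for col in df.select_dtypes(include="object"):
        df[col] = df[col].str.strip()
    return np.array([tuple(row) for row in df.itertuples(index=False)],
                    dtype=dtypes)
```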
+### *DisMAP ArcGIS Python Processing*
+
+- #### Director/Worker Pattern (parallel-capable processing):
+ The codebase follows a **director** → **worker** architecture for scalability:
+- **Directors** orchestrate workflows and can spawn parallel jobs using `multiprocessing.Pool`
+ - create_rasters_director.py — Orchestrates raster creation; supports sequential or multiprocess execution
+ - create_regions_from_shapefiles_director.py — Builds region geometries
+ - create_region_bathymetry_director.py — Bathymetry preprocessing
+ - Similar pairs for: species richness, sample locations, fishnets, year/image name tables
+
+- **Workers** perform the actual ArcGIS/ArcPy operations:
+ - create_rasters_worker.py — Creates interpolated rasters per region
+ - create_regions_from_shapefiles_worker.py — Converts shapefiles to GDB regions
+ - And corresponding worker files for each director
+
+- **Pattern Notes:**
+ - All modules use raw f-strings with Windows paths (e.g., `rf"{project_folder}\Bathymetry\Bathymetry.gdb"`)
+ - Consistent error handling: ArcPy exception catching + traceback + `sys.exit()`
+ - Functions clear local variables at completion (`del var`) and warn about remaining keys
+ - ArcGIS Pro logging is configured (history, metadata, message levels)
+
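The director → worker dispatch can be sketched as follows (a minimal stand-in: the real workers run ArcPy geoprocessing against isolated region GDBs, and the real directors add status polling and pool-termination on error):

```python
import multiprocessing

def worker(region_gdb: str) -> str:
    # Stand-in for a region worker; the real workers run ArcPy
    # geoprocessing against the given region GDB.
    return f"processed {region_gdb}"

def director(table_names, sequential=False):
    """Dispatch one worker per region, either sequentially or through a
    process pool, mirroring the Sequential flag described above."""
    if sequential:
        return [worker(name) for name in table_names]
    processes = max(1, multiprocessing.cpu_count() - 2)
    with multiprocessing.Pool(processes=processes) as pool:
        jobs = [pool.apply_async(worker, (name,)) for name in table_names]
        return [job.get() for job in jobs]  # block until every region is done
```

Each `apply_async` call returns immediately; the final `job.get()` loop is where the director waits for (and re-raises exceptions from) the workers.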
+- ### Create Regions from Shapefiles Director and Worker
+ - This director/worker pair converts survey region shapefiles into feature classes within the DisMAP geodatabase.
+
+ - #### **Director (create_regions_from_shapefiles_director.py)**
+
+ - **Purpose**: Orchestrates the creation of region boundaries and extent polygons from shapefiles for all IDW regions (AI, EBS, GOA, GMEX, SEUS, etc.).
+
+ - **Key Functions:**
+
+ 1. **`create_dismap_regions(project_gdb)`** — Initializes the DisMAP_Regions polyline template feature class:
+ - Creates empty `DisMAP_Regions` feature class in project GDB (spatial reference: WGS_1984_Web_Mercator_Auxiliary_Sphere)
+ - Adds schema fields via `dismap_tools.add_fields()`
+ - Imports metadata via `dismap_tools.import_metadata()`
+
+ 2. **`director(project_gdb, Sequential, table_names)`** — Main orchestration function:
+ - **Pre-processing** (sequential):
+ - Calls `create_dismap_regions()` to initialize template
+ - For each region (e.g., "AI_IDW", "EBS_IDW"):
+ - Creates region-specific GDB and scratch workspace (`{scratch_folder}\{table_name}.gdb`)
+ - Copies `Datasets` table and `DisMAP_Regions` template to region GDB
+ - Synchronizes metadata via `arcpy.metadata.Metadata`
+
+ - **Sequential or parallel processing**:
+ - **Sequential mode**: Calls `worker()` for each region sequentially
+ - **Parallel mode**: Uses `multiprocessing.Pool` (processes = CPU count - 2) with `apply_async()` to spawn workers; monitors job completion with status polling every ~7.5×processes seconds; gracefully handles exceptions with pool termination
+
+ - **Post-processing** (sequential):
+ - Walks scratch folder for all generated feature classes (Polygon/Polyline)
+ - Copies each result to project GDB
+ - For `*_Boundary` feature classes, appends them to the master `DisMAP_Regions` feature class
+ - Compacts project GDB
+
+ 3. **`script_tool(project_gdb)`** — Entry point (ArcGIS Pro tool parameter wrapper):
+ - Supports test/dev toggles for processing specific regions (hardcoded table names or empty list)
+ - Calls `director()` with default parallel processing (`Sequential=False`)
+ - Logs timing and environment info
+
+ - #### **Worker (create_regions_from_shapefiles_worker.py)**
+
+ - **Purpose**: Processes a single region; converts a region's boundary shapefile to feature classes (Region polygon + Boundary polyline).
+
+ - **Key Steps in `worker(region_gdb)`:**
+
+ 1. **Extract region metadata** — Queries `Datasets` table to fetch region info:
+ - Fields: TableName, GeographicArea, DatasetCode, Region, Season, DistributionProjectCode
+ - Example result: `['AI_IDW', 'AI_IDW_Region', 'AI', 'Aleutian Islands', None, 'IDW']`
+
+ 2. **Set spatial reference** — Loads `.prj` file from `Dataset_Shapefiles\{table_name}\{geographic_area}.prj`:
+ - Adjusts cell size and XY tolerance based on linear unit (Kilometer → 1 cell size; Meter → 1000 cell size)
+
+ 3. **Create region feature class** — Creates POLYGON feature class:
+ - Uses `DisMAP_Regions` template as schema source
+ - Applies region-specific spatial reference
+
+ 4. **Append shapefile data** — Imports geometry and attributes from source shapefile:
+ - `arcpy.management.Append()` copies features from `{geographic_area}.shp` into the new feature class
+
+ 5. **Calculate region fields** — Populates DatasetCode, Region, Season, DistributionProjectCode fields with values from Datasets table
+
+ 6. **Create boundary feature class** — Uses `FeatureToLine()` to extract boundary polylines from the region polygon; deletes auto-generated FID field
+
+ 7. **Alter field metadata** — Calls `dismap_tools.alter_fields()` to set field aliases and properties for both Region and Boundary feature classes
+
+ 8. **Synchronize metadata** — Copies metadata from source region feature class to boundary feature class via `arcpy.metadata.Metadata`
+
+ 9. **Cleanup** — Deletes template tables (`DisMAP_Regions`, `Datasets`), compacts region GDB
+
+ - ##### **Key Architectural Patterns**
+
+ - **Scalable parallelism**: Director spawns region workers in parallel pool; each worker processes independently with isolated GDB workspaces
+ - **Metadata cascade**: Copies metadata templates between feature classes to maintain consistency
+ - **Region isolation**: Each worker uses separate `.gdb` to avoid locking; director merges results post-processing
+ - **Workspace cleanup**: Removes intermediate templates and compacts GDBs to minimize database size
+
+ - ##### **Integration Points**
+
+ - Depends on `Dataset_Shapefiles\{region}\{geographic_area}.shp/.prj` files
+ - Reads region metadata from `Datasets` table (populated by `import_datasets_species_filter_csv_data.py`)
+ - Outputs region boundaries appended to master `DisMAP_Regions` feature class
+ - Boundary feature classes available for fishnet creation and region extent workflows downstream
+
+ - ### Create Region Fishnets Director and Worker
+ - This director/worker pair generates fishnet grids, extent points, and latitude/longitude rasters for each survey region.
+
+ - #### **Director (create_region_fishnets_director.py)**
+
+ - **Purpose**: Orchestrates fishnet generation for all IDW regions, supporting parallel processing with batch grouping for CPU efficiency.
+
+ - **Key Functions:**
+
+ 1. **`director(project_gdb, Sequential, table_names)`** — Main orchestration function:
+ - **Pre-processing** (sequential):
+ - Validates that `Datasets` and `*_Region` feature classes exist and contain records
+ - Creates region-specific GDB and scratch workspaces (`{scratch_folder}\{table_name}.gdb`)
+ - Copies `Datasets` table and `{table_name}_Region` feature class to each region GDB
+
+ - **Sequential or parallel processing**:
+ - **Sequential mode**: Calls `worker()` sequentially for each region
+ - **Parallel mode**: Uses `multiprocessing.Pool` (processes = CPU count - 2, maxtasksperchild=1); monitors job completion with status polling every ~7.5×processes seconds
+
+ - **Post-processing** (sequential):
+ - Walks scratch folder collecting all generated datasets (rasters, feature classes)
+ - Copies each result to project GDB
+ - Deletes source intermediate files from scratch workspace
+ - Compacts each region GDB and the project GDB
+
+ 2. **`script_tool(project_gdb)`** — Entry point with hardcoded batch grouping:
+ - Supports test/dev toggles with region subsets
+ - **Production batches** (parallel): Processes regions in two parallel director calls:
+ - Batch 1: `WC_TRI_IDW, GMEX_IDW, AI_IDW, GOA_IDW, WC_ANN_IDW`
+ - Batch 2: `NEUS_SPR_IDW, EBS_IDW, NEUS_FAL_IDW, SEUS_SUM_IDW`
+ - Logs timing and environment info
+
+ - #### **Worker (create_region_fishnets_worker.py)**
+
+ **Purpose**: For a single region, creates fishnet grid cells, extent points, and geographic coordinate rasters.
+
+ **Key Outputs Generated:**
+
+ 1. **Raster Mask** (`{table_name}_Raster_Mask`):
+ - Converts region boundary polygon to raster using cell size from Datasets table
+ - Serves as template for coordinate rasters
+
+ 2. **Extent Points** (`{table_name}_Extent_Points`):
+ - Creates 3 corner points (lower-left, upper-left, upper-right) of region boundary
+ - Calculates both projected (Easting/Northing) and geographic (Longitude/Latitude) coordinates using `AddXY()` and field aliasing
+
+ 3. **Fishnet** (`{table_name}_Fishnet`):
+ - Creates regular grid of POLYGON cells (cell size from Datasets table)
+ - Overlays fishnet on region boundary and removes cells outside region using `SelectLayerByLocation()` with 2×cell_size buffer
+ - Result: Only cells intersecting the region are retained
+
+ 4. **Lat-Long Centroids** (`{table_name}_Lat_Long`):
+ - Extracts fishnet cell centroids as point feature class
+ - Calculates both projected (Easting/Northing) and geographic (Longitude/Latitude) coordinates
+
+ 5. **Latitude Raster** (`{table_name}_Latitude`):
+ - Converts Lat-Long point centroids to raster using "Latitude" field
+ - Extracts values within raster mask using `ExtractByMask()`
+
+ 6. **Longitude Raster** (`{table_name}_Longitude`):
+ - Converts Lat-Long point centroids to raster using "Longitude" field
+ - Extracts values within raster mask using `ExtractByMask()`
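
The fishnet step above (output 3) builds a regular grid and keeps only cells near the region. A simplified, assumption-laden sketch of that idea in pure Python: the real worker uses `arcpy.management.CreateFishnet` plus `SelectLayerByLocation()` against the boundary geometry, while this version uses an axis-aligned bounding-box test standing in for the true spatial query.

```python
# Simplified sketch of fishnet creation and cell filtering. The real worker
# uses arcpy CreateFishnet + SelectLayerByLocation; this substitutes a
# bounding-box intersection test for illustration only.
def make_fishnet(xmin, ymin, xmax, ymax, cell):
    cells = []
    y = ymin
    while y < ymax:
        x = xmin
        while x < xmax:
            cells.append((x, y, x + cell, y + cell))
            x += cell
        y += cell
    return cells

def intersects(cell, region):
    # Axis-aligned bounding-box overlap test.
    cx0, cy0, cx1, cy1 = cell
    rx0, ry0, rx1, ry1 = region
    return cx0 < rx1 and cx1 > rx0 and cy0 < ry1 and cy1 > ry0

grid = make_fishnet(0, 0, 10, 10, 2)   # 5 x 5 = 25 cells
region_bbox = (3, 3, 7, 7)             # hypothetical region extent
kept = [c for c in grid if intersects(c, region_bbox)]
print(len(grid), len(kept))
```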
+
+ **Technical Details:**
+
+ - **Coordinate system handling**:
+ - Uses `EnvManager` context managers to temporarily override output coordinate system
+ - Converts from region's projected system (via `.prj` file) to WGS84 (EPSG:4326) for geographic coordinates
+ - Uses `dismap_tools.check_transformation()` to get geographic transformation parameters
+
+ - **Field renaming pattern**:
+ - `POINT_X` → `Easting` (projected) or `Longitude` (geographic)
+ - `POINT_Y` → `Northing` (projected) or `Latitude` (geographic)
+
+ - **Metadata & cleanup**:
+ - Calls `dismap_tools.alter_fields()` for field aliases
+ - Calls `dismap_tools.import_metadata()` to attach standardized metadata
+ - Synchronizes metadata via `arcpy.metadata.Metadata`
+ - Deletes intermediate `Datasets` and `{table_name}_Region` templates
+ - Compacts region GDB
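
The field-renaming convention above can be captured in a small helper. This is an illustrative sketch (the function name is invented); in the scripts the renaming is done with arcpy field tools after `AddXY()` runs under the appropriate output coordinate system.

```python
# Sketch of the POINT_X/POINT_Y renaming convention (illustrative helper;
# the scripts rename fields with arcpy after AddXY runs).
def rename_xy_fields(fields, geographic):
    mapping = (
        {"POINT_X": "Longitude", "POINT_Y": "Latitude"}
        if geographic
        else {"POINT_X": "Easting", "POINT_Y": "Northing"}
    )
    return [mapping.get(f, f) for f in fields]

print(rename_xy_fields(["OID", "POINT_X", "POINT_Y"], geographic=True))
print(rename_xy_fields(["OID", "POINT_X", "POINT_Y"], geographic=False))
```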
+
+ ##### **Key Architectural Patterns**
+
+ - **Batch parallelism**: Director runs multiple fishnet directors in sequence but each director spawns workers in parallel pool
+ - **Coordinate duality**: Each feature class maintains both projected (Easting/Northing) and geographic (Longitude/Latitude) coordinates
+ - **Region masking**: Fishnet cells are intelligently filtered to align with region boundary
+ - **Raster template**: Raster mask constrains all downstream latitude/longitude raster generation
+
+ ##### **Integration Points**
+
+ - Reads region geometry from `{table_name}_Region` feature class (from `create_regions_from_shapefiles_worker`)
+ - Reads cell size and dataset metadata from `Datasets` table
+ - Outputs fishnet cells, extent points, and coordinate rasters available for downstream IDW interpolation
+ - Longitude/Latitude rasters used for geographic coordinate layers in map displays
+
+ - ### Create Region Bathymetry Director and Worker
+ - This director/worker pair generates region-specific bathymetry rasters using zonal statistics over fishnet grids.
+
+ - #### **Director (create_region_bathymetry_director.py)**
+
+ - **Purpose**: Orchestrates bathymetry processing for all IDW regions, handling data staging and parallel worker coordination.
+
+ - **Key Functions:**
+
+ 1. **`director(project_gdb, Sequential, table_names)`** — Main orchestration function:
+ - **Pre-processing**:
+ - Calls `preprocessing()` to stage fishnet, raster mask, and bathymetry data for each region into separate GDBs
+ - Creates region-specific workspaces under Scratch folder
+
+ - **Sequential or parallel processing**:
+ - **Sequential mode**: Calls `worker()` sequentially for each region (currently commented/disabled)
+ - **Parallel mode**: Uses `multiprocessing.Pool` (processes = CPU count - 2, maxtasksperchild=1) to execute workers in parallel; monitors job completion with status polling every ~7.5×processes seconds
+
+ - **Post-processing** (sequential):
+ - Walks scratch folder collecting all generated raster datasets
+ - Copies each bathymetry raster to project GDB
+ - Calls `dismap_tools.alter_fields()` for feature classes/tables/mosaics
+ - Compacts project GDB
+
+ 2. **`script_tool(project_gdb)`** — Entry point with hardcoded batch grouping:
+ - **Production batches** (parallel): Processes regions in three parallel director calls:
+ - Batch 1: `NBS_IDW, ENBS_IDW, HI_IDW, SEUS_FAL_IDW, SEUS_SPR_IDW, SEUS_SUM_IDW`
+ - Batch 2: `WC_TRI_IDW, GMEX_IDW, AI_IDW, GOA_IDW, WC_ANN_IDW, NEUS_FAL_IDW`
+ - Batch 3: `NEUS_SPR_IDW, EBS_IDW`
+ - Logs timing and environment info
+
+ - #### **Worker (create_region_bathymetry_worker.py)**
+
+ This file contains three functions:
+
+ #### **1. `preprocessing(project_gdb, table_names, clear_folder)`**
+ Stages data for all regions into scratch workspace:
+ - Clears scratch folder if `clear_folder=True`
+ - For each region (e.g., `AI_IDW`), creates:
+ - Region-specific GDB: `{scratch_folder}\{table_name}.gdb`
+ - Region scratch workspace: `{scratch_folder}\{table_name}\scratch.gdb`
+ - Copies `Datasets` table from project GDB
+ - Copies `{table_name}_Fishnet` feature class from project GDB
+ - Copies `{table_name}_Raster_Mask` raster from project GDB
+ - Copies `{table_name}_Bathymetry` from `Bathymetry\Bathymetry.gdb` and renames to `{table_name}_Fishnet_Bathymetry`
+
+ #### **2. `worker(region_gdb)`**
+ Performs zonal statistics computation for a single region:
+
+ **Inputs:**
+ - `{table_name}_Fishnet` — Polygon feature class with fishnet grid cells (OID field)
+ - `{table_name}_Raster_Mask` — Template raster defining cell size, extent, and spatial reference
+ - `{table_name}_Fishnet_Bathymetry` — Bathymetry raster with depth values
+
+ **Processing:**
+ - Extracts cell size from raster mask metadata
+ - Sets environment: cell size, extent, mask, snapRaster from raster template
+ - Executes `arcpy.sa.ZonalStatistics()`:
+ - **Zone data**: Fishnet grid (one value per cell)
+ - **Zone field**: OID (unique cell identifier)
+ - **Value raster**: Bathymetry (depth values)
+ - **Statistics**: MEDIAN (median depth per cell)
+ - **Ignore NoData**: DATA (NoData cells in the value raster are excluded from the calculation)
+ - **Percentile**: 90th percentile (optional, for reference)
+
+ **Output:**
+ - `{table_name}_Bathymetry` — Raster with median depth value for each fishnet cell
+
+ **Cleanup:**
+ - Calls `dismap_tools.import_metadata()` to attach metadata
+ - Deletes intermediate files: `Datasets`, `Raster_Mask`, `Fishnet`, `Fishnet_Bathymetry`
+ - Compacts region GDB
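
The zonal MEDIAN computation with the DATA (ignore NoData) option can be illustrated in pure Python. This is a minimal sketch, not the arcpy implementation: each fishnet cell is a zone, `None` stands in for NoData, and only cells carrying data contribute to a zone's median.

```python
# Pure-Python illustration of zonal MEDIAN with the DATA option: value-raster
# pixels that are NoData (None here) are skipped, so only pixels carrying
# data contribute to each zone's statistic.
from statistics import median

def zonal_median(zone_ids, values):
    zones = {}
    for zid, val in zip(zone_ids, values):
        if val is None:          # NoData pixel: ignored under DATA mode
            continue
        zones.setdefault(zid, []).append(val)
    return {zid: median(vals) for zid, vals in zones.items()}

# Two fishnet cells; cell 2 contains one NoData pixel.
depths = zonal_median([1, 1, 1, 2, 2, 2], [-10, -20, -30, -5, None, -15])
print(depths)  # {1: -20, 2: -10.0}
```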
+
+ #### **3. `script_tool(project_gdb)`**
+ Development entry point (currently runs single region test):
+ - Calls `preprocessing()` to stage data
+ - Calls `worker()` for test region (`HI_IDW`)
+ - Used for single-region testing/debugging
+
+ #### **Key Architectural Patterns**
+
+ - **Two-stage processing**: Preprocessing stages all data in parallel-safe workspaces, then workers compute independently
+ - **Raster template pattern**: Raster mask provides all spatial environment parameters (cell size, extent, SR) for consistency
+ - **Zonal statistics optimization**: Uses median depth to provide representative bathymetry per fishnet cell
+ - **Batch grouping**: Director distributes the regions across 3 parallel batches to balance load across CPU cores
+
+ #### **Integration Points**
+
+ - **Inputs**:
+ - Fishnet grid from `create_region_fishnets_worker` outputs
+ - Raster mask from `create_region_fishnets_worker` outputs
+ - Bathymetry raster from `create_base_bathymetry.py` or bathymetry GDB
+
+ - **Outputs**:
+ - `{table_name}_Bathymetry` rasters available for downstream analysis and visualization
+ - Rasters propagated to project GDB for inclusion in maps and services
+
+ - **Data flow**:
+ ```
+ Bathymetry GDB → preprocessing() → region GDB → worker() → ZonalStatistics
+ Fishnet ─────────────────────────→ region GDB → worker() → (zone field)
+ Raster Mask ───────────────────────→ region GDB → worker() → (environment template)
+ ```
+
+ #### **Technical Details**
+
+ - **Zonal Statistics parameters**:
+ - Uses the DATA (ignore NoData) option so value-raster cells with NoData are excluded from each zone's statistic
+ - Uses MEDIAN to handle outliers better than MEAN for depth measurements
+ - Produces one output value per zone (fishnet cell) linked by OID
+
+ - **Environment management**:
+ - Uses `EnvManager` context manager for scratch workspace isolation
+ - Cell size derived from raster metadata: `arcpy.Describe(raster/Band_1).meanCellWidth`
+ - Linear unit handling: Kilometer → cell size 1 km; Meter → cell size 1000 m
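
The linear-unit handling above amounts to a small lookup, sketched here with an invented helper name; the scripts derive the unit from the spatial reference loaded from the `.prj` file.

```python
# Sketch of the linear-unit -> cell-size rule (illustrative helper name).
# The same physical cell size is expressed in the region's linear unit.
def cell_size_for_unit(linear_unit, base_km=1):
    if linear_unit == "Kilometer":
        return base_km           # cell size expressed in kilometers
    if linear_unit == "Meter":
        return base_km * 1000    # same physical size expressed in meters
    raise ValueError(f"unsupported linear unit: {linear_unit}")

print(cell_size_for_unit("Kilometer"), cell_size_for_unit("Meter"))
```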
+
+ - ### Create Region Sample Locations Director and Worker
+ - This director/worker pair creates sample location point feature classes from survey data CSVs, converting raw data into spatial datasets with standardized fields and metadata.
+
+ - #### **Director (create_region_sample_locations_director.py)**
+
+ - **Purpose**: Orchestrates sample location processing for all IDW regions, handling data staging and parallel worker coordination.
+
+ - **Key Functions:**
+
+ 1. **`director(project_gdb, Sequential, table_names)`** — Main orchestration function:
+ - **Pre-processing** (sequential):
+ - Clears scratch folder
+ - Creates region-specific GDBs under Scratch folder
+ - Copies `Datasets` table and `{table_name}_Region` feature class to each region GDB
+
+ - **Sequential or parallel processing**:
+ - **Sequential mode**: Calls `worker()` sequentially for each region
+ - **Parallel mode**: Uses `multiprocessing.Pool` (processes = CPU count - 2, maxtasksperchild=1); monitors job completion with status polling every ~7.5×processes seconds
+
+ - **Post-processing** (sequential):
+ - Walks scratch folder collecting all generated tables and feature classes
+ - Copies each result to project GDB
+ - Compacts project GDB
+
+ 2. **`script_tool(project_gdb)`** — Entry point with test mode:
+ - Default test mode: Single region (`GMEX_IDW`)
+ - Can run all 15 IDW regions in non-sequential mode
+ - Logs timing and environment info
+
+ - #### **Worker (create_region_sample_locations_worker.py)**
+
+ - **Purpose**: For a single region, imports survey CSV data, transforms and enriches fields, converts to GDB table, then exports as point feature class with spatial reference.
+
+ - **Processing Pipeline (3 Major Phases):**
+
+ #### **Phase 1: CSV Loading & Field Extraction**
+
+ - Detects CSV encoding using `dismap_tools.get_encoding_index_col()` (chardet-based)
+ - Loads CSV with pandas using schema-based data types from `dismap_tools.dTypesCSV()`
+ - Queries `Datasets` table to extract region metadata:
+ - TableName, GeographicArea, DatasetCode, Region, Season, DistributionProjectCode
+
+ - **Column renaming** (22 aliases handled):
+ - `spp` / `spp_sci` → `Species`
+ - `common` / `spp_common` → `CommonName`
+ - `lon` → `Longitude`; `lat` → `Latitude`
+ - `lon_UTM` → `Easting`; `lat_UTM` → `Northing`
+ - `haulid` / `sampleid` → `SampleID`
+ - `wtcpue` → `WTCPUE`
+ - `median_est` / `mean_est` → `MedianEstimate` / `MeanEstimate`
+ - `est5` / `est95` → `Estimate5` / `Estimate95`
+ - `year` → `Year`; `depth_m` → `Depth`
+ - `stratum` → `Stratum`; `transformed` → `MapValue`
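
The alias handling above boils down to a rename mapping. A minimal sketch (the worker applies the equivalent mapping with pandas `DataFrame.rename`; only a subset of the aliases is shown):

```python
# Illustration of the alias-to-schema column renaming step (subset of the
# aliases; the worker applies the full mapping via pandas DataFrame.rename).
COLUMN_ALIASES = {
    "spp": "Species", "spp_sci": "Species",
    "common": "CommonName", "spp_common": "CommonName",
    "lon": "Longitude", "lat": "Latitude",
    "lon_UTM": "Easting", "lat_UTM": "Northing",
    "haulid": "SampleID", "sampleid": "SampleID",
    "wtcpue": "WTCPUE", "year": "Year",
    "depth_m": "Depth", "stratum": "Stratum",
    "transformed": "MapValue",
}

def rename_columns(columns):
    return [COLUMN_ALIASES.get(c, c) for c in columns]

print(rename_columns(["haulid", "spp", "common", "lat", "lon", "wtcpue"]))
```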
+
+ #### **Phase 2: DataFrame Transformation & Enrichment**
+
+ **Insert new columns with derived/default values:**
+
+ - `DatasetCode`: From Datasets table
+ - `Region`: From Datasets table (or default if None)
+ - `Season`: From Datasets table (or empty string if None)
+ - `StdTime`: Calculated from Year (pd.to_datetime with GMT+12 timezone)
+ - `MapValue`: Cube root transformation of WTCPUE (`WTCPUE ^ (1/3)`)
+ - `SpeciesCommonName`: Format `"Species (CommonName)"` where CommonName exists
+ - `CommonNameSpecies`: Format `"CommonName (Species)"` where CommonName exists
+ - `SummaryProduct`: Set to "Yes"
+ - `TransformUnit`: Set to "cuberoot" (documents MapValue transformation)
+ - `CoreSpecies`: Set to "No" (default; could be calculated later)
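
Three of the derived columns above are simple enough to sketch directly. These are illustrative stand-ins, not the worker's pandas code: `MapValue` is the cube root of WTCPUE, `SpeciesCommonName` is a string format, and `StdTime` is built here with stdlib `datetime` where the worker uses `pd.to_datetime` in a GMT+12 timezone.

```python
# Sketches of three derived columns (illustrative; the worker computes these
# column-wise with pandas).
from datetime import datetime, timezone, timedelta

def map_value(wtcpue):
    # Cube-root transform used for MapValue (WTCPUE is non-negative).
    return wtcpue ** (1.0 / 3.0)

def species_common_name(species, common):
    # "Species (CommonName)" where a common name exists, else just Species.
    return f"{species} ({common})" if common else species

def std_time(year):
    # ISO 8601 timestamp for Jan 1 of the survey year in a GMT+12 offset.
    return datetime(year, 1, 1, tzinfo=timezone(timedelta(hours=12))).isoformat()

print(round(map_value(27.0), 6))
print(species_common_name("Anoplopoma fimbria", "Sablefish"))
print(std_time(2020))
```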
+
+ **Handle null/missing values:**
+
+ - Replace `NaN` with empty string for string fields (CommonName, DistributionProjectName)
+ - Replace `Inf` and `-Inf` with `NaN` in numeric fields (WTCPUE, coordinates, depth)
+ - Fill numeric nulls appropriately for Latitude, Longitude, Depth, WTCPUE
+
+ - **Reorder columns** using `dismap_tools.table_definitions()` to match schema order
+
+ #### **Phase 3: GDB Table Creation & Feature Conversion**
+ - Convert enriched DataFrame to NumPy structured array using GDB field types (`dismap_tools.dTypesGDB()`)
+ - Create temporary GDB table via `arcpy.da.NumPyArrayToTable()`
+ - Copy rows to output table: `{table_name}` (permanent table in region GDB)
+ - Delete temporary table
+
+ **Convert Table to Feature Class:**
+
+ - Create XY Event Layer from output table using:
+ - X field: `Longitude` (geographic coordinates, WGS84)
+ - Y field: `Latitude` (geographic coordinates, WGS84)
+ - Spatial reference: WGS84 (EPSG:4326)
+ - Geographic transformation: Auto-determined via `dismap_tools.get_transformation()`
+
+ - Export XY Event Layer to feature class: `{table_name}_Sample_Locations`
+ - Add attribute index on: Species, CommonName, SpeciesCommonName, Year
+ - Delete temporary XY Event Layer
+
+ **Metadata & Field Aliasing:**
+
+ - Copy metadata from CSV to output table via `arcpy.metadata.Metadata`
+ - Call `dismap_tools.alter_fields()` to set field aliases and properties for both table and feature class
+ - Synchronize metadata for both
+
+ **Cleanup:**
+
+ - Delete intermediate datasets: `{table_name}_Boundary`, `{table_name}_Region`, `Datasets`
+ - Compact region GDB
+
+ - #### **Key Data Transformations**
+ 1. **MapValue (WTCPUE Cube Root)**: Normalizes highly skewed WTCPUE distribution for visualization
+ 2. **SpeciesCommonName/CommonNameSpecies**: Dual naming conventions for UI display flexibility
+ 3. **StdTime**: Creates ISO 8601 timestamp for temporal queries
+ 4. **Field Reordering**: Enforces consistent column order matching GDB schema
+
+ - #### **Integration Points**
+
+ - **Inputs**:
+ - Survey CSV files from `{project_folder}\CSV_Data\{table_name}.csv`
+ - Region geometry from `{table_name}_Region` (from `create_regions_from_shapefiles_worker`)
+ - Schema definitions from `field_definitions.json` and `table_definitions.json`
+ - Datasets metadata table (populated by `import_datasets_species_filter_csv_data.py`)
+
+ - **Outputs**:
+ - `{table_name}` — GDB table with all survey records and enriched fields
+ - `{table_name}_Sample_Locations` — Point feature class in WGS84 (for web/portal publishing)
+ - Attribute index on (Species, CommonName, SpeciesCommonName, Year) for fast queries
+
+ - **Data flow**:
+ ```
+ CSV File → pandas DataFrame (with encoding detection)
+ → Column rename & enrichment (22+ derived fields)
+ → NumPy array conversion (type casting)
+ → GDB table creation
+ → XY Event Layer (geographic coordinates)
+ → Point Feature Class (WGS84)
+ ```
+
+ - #### **Technical Details**
+
+ - **Encoding detection**: chardet-based; handles non-ASCII characters in CommonName fields
+ - **Data type schema**: Schema-driven via JSON; ensures type safety across CSV→DataFrame→NumPy→GDB pipeline
+ - **Coordinate systems**:
+ - CSV source: Latitude/Longitude (WGS84) and Easting/Northing (projected, region-specific)
+ - Output table: Both coordinate systems preserved
+ - Output feature class: WGS84 only (for portal interoperability)
+ - **Transformation chain**: WTCPUE → MapValue (cube root) documented via TransformUnit field for traceability
+ - **Pandas optimizations**: Set display options for logging; use `inplace=False` operations for clarity; delete DataFrame after NumPy conversion to free memory
+
+ - ### Create Species Year Image Name Table Director and Worker
+ - This director/worker pair generates a catalog table mapping species/year combinations to image names, which coordinates what rasters should be created during downstream workflows (IDW, richness, mosaics).
+
+ - #### **Director (create_species_year_image_name_table_director.py)**
+
+ - **Purpose**: Orchestrates the creation of `LayerSpeciesYearImageName` metadata tables for all IDW regions, which serve as operational catalogs for subsequent raster generation workflows.
+
+ **Key Functions:**
+
+ 1. **`director(project_gdb, Sequential, table_names)`** — Main orchestration function:
+ - **Pre-processing** (sequential):
+ - Calls `preprocessing()` to stage sample location data and species filter tables for each region
+ - Creates region-specific GDBs under Scratch folder
+
+ - **Sequential or parallel processing**:
+ - **Sequential mode**: Calls `worker()` sequentially for each region
+ - **Parallel mode**: Uses `multiprocessing.Pool` (processes = CPU count - 2, maxtasksperchild=1); monitors job completion with status polling every ~7.5×processes seconds
+
+ - **Post-processing** (sequential):
+ - Walks scratch folder collecting all generated `*_LayerSpeciesYearImageName` tables
+ - Copies each table to project GDB
+ - Replaces None values with empty strings in string fields
+ - Compacts project GDB
+
+ 2. **`script_tool(project_gdb)`** — Entry point with hardcoded batch grouping:
+ - **Production batches** (parallel):
+ - Batch 1: `GMEX_IDW, HI_IDW, WC_ANN_IDW, WC_TRI_IDW` (4 regions, test mode)
+ - Batch 2: `AI_IDW, EBS_IDW, ENBS_IDW, GOA_IDW, NBS_IDW` (5 regions, commented)
+ - Batch 3: `HI_IDW, WC_ANN_IDW, WC_TRI_IDW` (3 regions, commented)
+ - Batch 4: `GMEX_IDW, NEUS_FAL_IDW, NEUS_SPR_IDW` (3 regions, commented)
+ - Batch 5: `NEUS_FAL_IDW, NEUS_SPR_IDW, SEUS_FAL_IDW, SEUS_SPR_IDW, SEUS_SUM_IDW` (5 regions, active)
+ - Logs timing and environment info
+
+ - #### **Worker (create_species_year_image_name_table_worker.py)**
+ - **Purpose**: For a single region, generates a `LayerSpeciesYearImageName` table that catalogs all species/year combinations and assigns standardized image names for raster outputs.
+
+ - **Processing Pipeline (Multi-Stage):**
+
+ #### **Phase 1: Create Base LayerSpeciesYearImageName Table**
+
+ - Creates new empty table: `{table_name}_LayerSpeciesYearImageName`
+ - Calls `dismap_tools.add_fields()` to populate schema (200+ fields from `field_definitions.json`)
+ - Calls `dismap_tools.import_metadata()` to attach metadata templates
+
+ #### **Phase 2: Load Reference Data**
+
+ **a) Datasets Table:**
+ - Queries `Datasets` table for region metadata
+ - Extracts: FilterRegion, FilterSubRegion
+ - Example: `['Aleutian Islands', 'Alaska']`
+
+ **b) Region IDW Table:**
+ - The main sample location table: `{table_name}` (e.g., `AI_IDW`)
+ - Contains all survey records with Species, Year, CommonName, etc.
+ - Logs unique species count
+
+ **c) Species Filter Table:**
+ - Filters to: `FilterSubRegion = '{filter_subregion}' AND DistributionProjectName = 'NMFS/Rutgers IDW Interpolation'`
+ - Builds species_filter dict: `{species: [CommonName, TaxonomicGroup, FilterRegion, FilterSubRegion, ManagementBody, ManagementPlan, DistributionProjectName]}`
+ - Example entry: `'Anoplopoma fimbria': ['Sablefish', 'Perciformes/Cottoidei (sculpins)', 'Alaska', 'Aleutian Islands', 'NPFMC', '...', 'NMFS/Rutgers IDW Interpolation']`
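
The species_filter dict described above enables constant-time lookups inside the update cursor. A minimal sketch with illustrative rows (the real rows come from a search cursor over the filtered `Species_Filter` table):

```python
# Sketch of the species_filter lookup dict (rows are illustrative; the real
# rows come from a search cursor over the filtered Species_Filter table).
rows = [
    ("Anoplopoma fimbria", "Sablefish", "Perciformes/Cottoidei (sculpins)",
     "Alaska", "Aleutian Islands", "NPFMC", "FMP",
     "NMFS/Rutgers IDW Interpolation"),
]
species_filter = {species: list(attrs) for species, *attrs in rows}

# Cursor-time lookup with an empty-string fallback for unmatched species.
common_name = species_filter.get("Anoplopoma fimbria", [""])[0]
missing = species_filter.get("Gadus morhua", [""])[0]
print(common_name, repr(missing))
```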
+
+ #### **Phase 3: Generate Base Species/Year Records**
+
+ **Statistical Aggregation:**
+ - Uses `arcpy.analysis.Statistics()` to get unique species/year combinations from sample locations
+ - Case fields: Filters to fields that appear in both LayerSpeciesYearImageName schema and region IDW table
+ - Removes COUNT/FREQUENCY fields to create unique species/year records
+ - Result: `{table_name}_tmp` temporary table with one row per species/year combination
+
+ **Field Addition:**
+ - Adds 10 new fields to `{table_name}_tmp`:
+ - FilterRegion, FilterSubRegion, TaxonomicGroup, ManagementBody, ManagementPlan, DistributionProjectName
+ - Variable, Value, Dimensions, ImageName
+
+ **Row Population with Update Cursor:**
+
+ For each species/year combination:
+ - **Variable**: Species name with spaces replaced by underscores and parentheses/periods removed → e.g., `"Anoplopoma fimbria"` becomes `"Anoplopoma_fimbria"`
+ - **Value**: Set to `"Species"` (literal)
+ - **Dimensions**: Set to `"StdTime"` (standard time dimension)
+ - **ImageName**: Formatted as `{table_name}_{variable}_{year}` → e.g., `AI_IDW_Anoplopoma_fimbria_2020`
+ - **FilterRegion/FilterSubRegion**: Looked up from species_filter dict if species found, else empty string
+ - **TaxonomicGroup/ManagementBody/ManagementPlan/DistributionProjectName**: Looked up from species_filter dict
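
The Variable/ImageName construction above can be sketched with two small helpers (illustrative names; the worker does the equivalent string cleanup inline in its update cursor):

```python
# Sketch of the ImageName construction: strip characters unsafe in raster
# names, then format {table_name}_{variable}_{year} (helper names invented).
def sanitize_variable(species):
    cleaned = species.replace("(", "").replace(")", "").replace(".", "")
    return cleaned.replace(" ", "_")

def image_name(table_name, species, year):
    return f"{table_name}_{sanitize_variable(species)}_{year}"

print(image_name("AI_IDW", "Anoplopoma fimbria", 2020))
```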
+
+ **Append to Output Table:**
+ - Appends populated records to `layer_species_year_image_name` table
+
+ #### **Phase 4: Generate Species Richness Records (Core)**
+
+ **Statistical Aggregation for Core Species:**
+ - Applies second `arcpy.analysis.Statistics()` to group by non-species fields
+ - Creates `{table_name}_tmp_stats` with unique region/season/dataset records
+ - Adds Variable, Value, Dimensions, ImageName fields
+
+ **Row Population - Core Species Richness:**
+
+ For each region/season combination (only where core species exist):
+ - **Variable**: `"Core Species Richness"`
+ - **Value**: `"Core Species Richness"`
+ - **Dimensions**: `"StdTime"`
+ - **ImageName**: `{table_name}_Core_Species_Richness_{year}`
+ - **CoreSpecies**: Set to `"Yes"` (signals core richness raster)
+ - **FilterRegion/FilterSubRegion**: From region metadata
+
+ **Append Core Records to Output Table**
+
+ #### **Phase 5: Generate Species Richness Records (Total)**
+
+ **Row Population - Total Species Richness:**
+
+ For each region/season combination (all species):
+ - **Variable**: `"Species Richness"` (differs from core)
+ - **Value**: `"Species Richness"`
+ - **Dimensions**: `"StdTime"`
+ - **ImageName**: `{table_name}_Species_Richness_{year}`
+ - **CoreSpecies**: Set to `"No"` (signals all-species richness raster)
+
+ **Append Total Records to Output Table**
+
+ #### **Phase 6: Finalization**
+
+ - Calls `dismap_tools.alter_fields()` to set field aliases and properties
+ - Sets metadata on `layer_species_year_image_name`: synchronizes ArcGIS metadata
+ - Deletes intermediate tables: `{table_name}_tmp`, `{table_name}_tmp_stats`, and optionally temporary dataset tables
+ - Compacts region GDB
+
+ ### **Output Table Structure**
+
+ `{table_name}_LayerSpeciesYearImageName` contains records with fields:
+
+ | Field Name | Purpose | Example |
+ |---|---|---|
+ | DatasetCode | Region code | `AI` |
+ | Region | Region name | `Aleutian Islands` |
+ | Species | Scientific species name | `Anoplopoma fimbria` |
+ | CommonName | Common species name | `Sablefish` |
+ | SpeciesCommonName | Formatted combo | `Anoplopoma fimbria (Sablefish)` |
+ | Year | Survey year | `2020` |
+ | StdTime | Standard time (derived) | (calculated) |
+ | Variable | Raster type | `Anoplopoma_fimbria` or `Species Richness` |
+ | Value | Raster value description | `Species` or `Species Richness` |
+ | Dimensions | Analysis dimension | `StdTime` |
+ | ImageName | Output raster name | `AI_IDW_Anoplopoma_fimbria_2020` or `AI_IDW_Species_Richness_2020` |
+ | CoreSpecies | Flag for richness rasters | `Yes` (core) / `No` (total) / blank (species) |
+ | FilterRegion/FilterSubRegion | Geographic filters | From species_filter lookup |
+ | TaxonomicGroup | Taxonomic class | From species_filter lookup |
+ | ManagementBody | Fishery management org | From species_filter lookup |
+ | ManagementPlan | Management plan reference | From species_filter lookup |
+ | DistributionProjectName | Distribution project | `NMFS/Rutgers IDW Interpolation` |
+
+ #### **Key Data Operations**
+
+ 1. **Statistical Deduplication:**
+ - Uses `arcpy.analysis.Statistics()` twice:
+ - First: Identify unique species/year combinations from sample locations
+ - Second: Identify unique region/season combinations for richness catalogs
+
+ 2. **Multi-Stage Table Generation:**
+ - Three types of rows generated in sequence:
+ 1. Species-specific rasters (one per species/year)
+ 2. Core species richness rasters (aggregate)
+ 3. Total species richness rasters (aggregate)
+
+ 3. **Dictionary Lookup for Metadata:**
+ - Species filter dict enables efficient lookups during cursor operations
+ - Prevents repeated database queries (performance optimization)
+
+ 4. **Name Standardization:**
+ - Removes special characters (parentheses, periods) from species names for file naming
+ - Creates consistent `ImageName` format: `{region}_{variable}_{year}`
+
+ #### **Integration Points**
+
+ - **Inputs**:
+ - `{table_name}` (sample locations table from `create_region_sample_locations_worker`)
+ - `Species_Filter` table (metadata table with species/management mappings)
+ - `Datasets` table (region metadata: FilterRegion, FilterSubRegion)
+
+ - **Outputs**:
+ - `{table_name}_LayerSpeciesYearImageName` table — Catalog of all rasters to generate
+ - Contains 3 record types: species-specific (many), core richness (few), total richness (few)
+
+ - **Data flow**:
+ ```
+ Sample Locations (Species/Year data)
+ → Statistics → unique combinations
+ ↓
+ Species Filter (Taxonomy/Management)
+ ↓
+ Update Cursor (populate Variable/ImageName/Metadata)
+ ↓
+ LayerSpeciesYearImageName Table (operational catalog)
+
+ Used downstream by:
+ - create_rasters_worker.py (species rasters per ImageName)
+ - create_species_richness_rasters_worker.py (richness rasters per ImageName)
+ - create_mosaics_worker.py (mosaic generation)
+ ```
+
+ #### **Technical Architecture**
+
+ - **Deduplication Strategy**: Uses ArcGIS `Statistics` tool rather than manual cursor loops for efficient unique-value extraction
+ - **Temporary Tables**: Uses `_tmp` suffix for intermediate working tables; cleaned up post-processing
+ - **Cursor Operations**: Three separate update cursor passes to populate different record types (species, core richness, total richness)
+ - **Metadata Lookup**: Builds in-memory dict from Species_Filter to avoid repeated database queries
+ - **Name Formatting**: Removes special characters from species names to ensure valid raster filenames
+ - **Batch Parallelism**: Director processes multiple regions in parallel batches; each worker independently generates catalog
+
+ - ### Create Rasters Director and Worker
+ - This director/worker pair creates interpolated rasters from sample location data using Inverse Distance Weighting (IDW) geostatistical analysis.
+
+ - #### **Director (create_rasters_director.py)**
+
+ - **Purpose**: Orchestrates raster creation for all IDW regions, generating spatially-interpolated surfaces from survey data with parallel processing support.
+
+ - **Key Functions:**
+
+ 1. **`director(project_gdb, Sequential, table_names)`** — Main orchestration function:
+ - **Pre-processing** (sequential):
+ - Calls `preprocessing()` to stage sample location data for each region into separate GDBs
+ - Creates region-specific workspaces under Scratch folder
+
+ - **Sequential or parallel processing**:
+ - **Sequential mode**: Calls `worker()` sequentially for each region
+ - **Parallel mode**: Uses `multiprocessing.Pool` (processes = CPU count - 2, maxtasksperchild=1); monitors job completion with status polling every ~7.5×processes seconds
+
+ - **Post-processing** (sequential):
+ - Compacts project GDB only (no explicit raster copying—results already in region GDBs)
+
+ 2. **`script_tool(project_gdb)`** — Entry point with hardcoded batch grouping:
+ - **Production batches** (parallel):
+ - Batch 1: `AI_IDW` (single region, default test)
+ - Batch 2: `EBS_IDW, ENBS_IDW, GMEX_IDW, GOA_IDW, NBS_IDW` (5 regions)
+ - Batch 3: `HI_IDW` (single region)
+ - Batch 4: `WC_ANN_IDW, WC_TRI_IDW` (2 regions)
+ - Batch 5: `SEUS_FAL_IDW, SEUS_SPR_IDW, SEUS_SUM_IDW` (3 regions)
+ - Batch 6: `NEUS_FAL_IDW, NEUS_SPR_IDW` (2 regions, currently active in code)
+ - Logs timing and environment info
+
+ - #### **Worker (create_rasters_worker.py)**
+ - **Purpose**: For a single region, generates IDW interpolated rasters for each species/year combination using survey sample locations.
+
+ - **Processing Pipeline:**
+
+ - #### **Phase 1: Data Staging (`preprocessing()` function)**
+
+ Pre-processes data for all regions into isolated region GDBs:
+
+ - Clears scratch folder if `clear_folder=True`
+ - For each region (e.g., `AI_IDW`), creates:
+ - Region-specific GDB: `{scratch_folder}\{table_name}.gdb`
+ - Region scratch workspace: `{scratch_folder}\{table_name}\scratch.gdb`
+
+ - Copies data to region GDB:
+ - `Datasets` table (metadata for region parameters: cell size, geographic area)
+ - `{table_name}_LayerSpeciesYearImageName` table (catalog of rasters to generate)
+ - `{table_name}_Sample_Locations` feature class (survey data points filtered for IDW species distribution)
+ - `{table_name}_Raster_Mask` raster (spatial template: extent, cell size, coordinate system)
+
+ - Extracts FilterRegion and FilterSubRegion values for later reference
+
+ - #### **Phase 2: Raster Generation (`worker()` function)**
+
+ For each species/year combination, executes multi-step IDW interpolation:
+
+ **1. Prepare Output Raster Catalog:**
+ - Queries `{table_name}_LayerSpeciesYearImageName` table for all rasters to generate
+ - Extracts 4 fields: ImageName, Variable, Species, Year
+ - Filters to exclude "Species Richness" rasters (handled by separate workflow)
+ - Builds output_rasters dict: `{image_name: [image_name, variable, species, year, output_raster_path]}`
+ - Creates folder structure: `{project_folder}\Images\{table_name}\{variable}\{image_name}.tif`
+
+ **2. Prepare Sample Location Feature Layer:**
+ - Creates feature layer from `{table_name}_Sample_Locations` for attribute selection
+ - Filters for `DistributionProjectName = 'NMFS/Rutgers IDW Interpolation'` (IDW-compatible data)
+ - If `SummaryProduct == "Yes"`: Adds `YearWeights` field (short integer, alias "Year Weights")
+
+ **3. For Each Raster to Generate:**
+
+ **a) Select Species and Year Data:**
+ - Clears previous selection
+ - Selects sample points where `Species = '{species}' AND Year = {year}`
+ - Logs count of selected records
+
+ **b) Set IDW Search Neighborhood:**
+ - Extracts cell size from region metadata
+ - Calculates search ellipse:
+ - Major axis: `cell_size × 1000` (converts kilometers to meters)
+ - Minor axis: `cell_size × 1000`
+ - Angle: 0 (no rotation)
+ - Max neighbors: 15
+ - Min neighbors: 10
+ - Sector type: ONE_SECTOR
+ - Uses `arcpy.SearchNeighborhoodStandard()` for neighbor selection
+
+ **c) Weight Years for Temporal Smoothing:**
+ - Selects weighted years: `Year >= (year-2) AND Year <= (year+2)` (±2 years from target year)
+ - Calculates YearWeights using: `YearWeights = 3 - abs(year_target - year_sample)`
+ - Target year: weight = 3
+ - ±1 year: weight = 2
+ - ±2 years: weight = 1
+ - Logs count of weighted records
+
+ **d) Execute IDW Interpolation:**
+ - Checks out GeoStats extension: `arcpy.CheckOutExtension("GeoStats")`
+ - Sets environment parameters:
+ - Extent, mask, snapRaster from `region_raster_mask` (spatial alignment)
+ - Cell size from region metadata
+ - Output coordinate system from raster mask
+ - Calls `arcpy.ga.IDW()` with parameters:
+ - Input features: `sample_locations_path_layer` (selected points)
+ - Z field: `MapValue` (cube-root transformed WTCPUE from sample locations)
+ - Output raster: `memory\{output_raster}` (temporary in-memory raster)
+ - Cell size: region-specific
+ - Power: 2 (standard IDW power for distance weighting)
+ - Search neighborhood: calculated ellipse with temporal weights
+ - Weight field: `YearWeights` (applies temporal smoothing)
+
+ **e) Reverse MapValue Transformation:**
+ - Executes `arcpy.sa.Power(tmp_raster, 3)` to cube the interpolated values
+ - Reverses the cube-root transformation: `MapValue^3 = WTCPUE`
+ - Saves power-transformed raster to output TIF file path
+
+ **f) Build Pyramids:**
+ - Constructs pyramid levels (-1 = all levels)
+ - Resampling: BILINEAR
+ - Compression: DEFAULT with 75% quality
+ - Skip existing: OVERWRITE
+
+ **4. Metadata & Cleanup:**
+ - Sets raster metadata title: image_name with underscores replaced by spaces
+ - Synchronizes metadata: `tif_md.synchronize("ALWAYS")`
+ - Deletes intermediate in-memory raster
+ - Resets YearWeights field to None
+ - Clears feature layer selection
+
+ **5. Final Cleanup:**
+ - Deletes sample location feature layer
+ - Deletes intermediate datasets: `Datasets`, `Raster_Mask`, `LayerSpeciesYearImageName`
+ - Compacts region GDB
+
+ - #### **Key Data Transformations**
+
+ 1. **MapValue → WTCPUE (Cube Root Reversal):**
+ - Input rasters: Cube-root transformed WTCPUE (MapValue) from sample locations
+ - IDW interpolates across landscape: `MapValue = WTCPUE^(1/3)`
+ - Output: `interpolated_MapValue^3 = WTCPUE` (restores original units)
+ - Justification: Cube root normalizes highly skewed WTCPUE distribution for interpolation
+
+ 2. **Temporal Weighting:**
+ - For target year Y: selects sample data from years Y-2 to Y+2
+ - Weight formula: `3 - |Y_target - Y_sample|`
+ - Effect: Center year has 3× influence; ±2 years have 1× influence
+ - Reduces temporal noise while maintaining year-specific estimates
+
+ 3. **Search Neighborhood:**
+ - Ellipse major/minor axes = `cell_size × 1000` meters
+ - Ensures interpolation uses local sample points (not global)
+ - Min/Max neighbors: 10–15 points per cell (balance: detail vs. stability)
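The two numeric transformations above can be sketched in plain NumPy. This is a minimal illustration with made-up values, not the production worker code (which runs through arcpy):

```python
import numpy as np

# Hypothetical WTCPUE values; the real ones come from the enriched
# sample-location feature class.
wtcpue = np.array([0.0, 1.0, 8.0, 1000.0])

# Forward transform applied upstream: MapValue = WTCPUE ** (1/3)
map_value = np.cbrt(wtcpue)

# Reversal applied after IDW, mirroring arcpy.sa.Power(raster, 3)
restored = map_value ** 3
assert np.allclose(restored, wtcpue)

# Temporal weighting: weight = 3 - |target year - sample year| over a +/-2 window
def year_weights(target_year):
    return {y: 3 - abs(target_year - y)
            for y in range(target_year - 2, target_year + 3)}

assert year_weights(2015) == {2013: 1, 2014: 2, 2015: 3, 2016: 2, 2017: 1}
```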
+
+ - #### **Integration Points**
+
+ - **Inputs**:
+ - `{table_name}_Sample_Locations` feature class (from `create_region_sample_locations_worker`)
+ - `{table_name}_Raster_Mask` raster (from `create_region_fishnets_worker`)
+ - `{table_name}_LayerSpeciesYearImageName` table (mapping of species/year to raster outputs)
+ - `Datasets` table (region cell size, metadata)
+ - MapValue field (cube-root WTCPUE from sample enrichment)
+
+ - **Outputs**:
+ - Raster TIFs saved to: `{project_folder}\Images\{table_name}\{variable}\{image_name}.tif`
+ - One interpolated surface per species/year/region combination
+ - Rasters include pyramids and spatial reference from region mask
+
+ - **Data flow**:
+ ```
+ Sample Locations (points with MapValue)
+ → Feature selection by species/year
+ → IDW interpolation (with YearWeights neighbor weighting)
+ → Power transform (cube back to WTCPUE)
+ → TIF output with pyramids
+ ```
+
+ - #### **Technical Details**
+
+ - **Geostatistical Analyst Requirements**: Raster creation depends on ArcGIS GeoStats extension checkout/check-in
+ - **Memory Management**: Uses in-memory raster (`memory\{name}`) for temporary IDW output before power transform and final TIF save
+ - **Raster Masking**: Extent, mask, and snapRaster from `region_raster_mask` ensure outputs conform to fishnet grid alignment
+ - **Year Weights Implementation**: CalculateField expressions: `3 - (abs({year} - !Year!))` dynamically applied to selected records
+ - **Coordinate System**: Output rasters inherit spatial reference from `region_raster_mask` (region-specific projection)
+ - **Parallel Batching**: Director splits the IDW regions into 6 sequential batches (sizes 1–5 regions) to manage CPU load across multiple `director()` calls
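The director's sequential/parallel dispatch pattern can be sketched as follows. This is a simplified, thread-backed stand-in: the real director uses `multiprocessing.Pool(processes, maxtasksperchild=1)` and a worker that runs arcpy, and the region names here are only illustrative.

```python
from multiprocessing import cpu_count
from multiprocessing.pool import ThreadPool  # thread-backed stand-in for multiprocessing.Pool

def worker(table_name):
    # Placeholder for create_rasters_worker.worker(), which runs the IDW pipeline
    return f"{table_name}: done"

def director(table_names, sequential=False):
    if sequential:
        return [worker(t) for t in table_names]
    processes = max(1, cpu_count() - 2)   # pool size used by the director
    poll_seconds = 7.5 * processes        # status-polling interval (~7.5 x processes)
    with ThreadPool(processes) as pool:
        result = pool.map_async(worker, table_names)
        while not result.ready():         # the real director logs job status here
            result.wait(min(poll_seconds, 0.1))  # interval shortened for the sketch
        return result.get()

print(director(["NEUS_FAL_IDW", "NEUS_SPR_IDW"]))
```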
+
+- ### Create Species Richness Rasters Director and Worker
+
+This director/worker pair generates species richness rasters by counting the presence/absence of species across interpolated surfaces for each year and region.
+
+### **Director (create_species_richness_rasters_director.py)**
+
+**Purpose**: Orchestrates species richness raster generation for all IDW regions, aggregating species count rasters from existing interpolated species distributions.
+
+**Key Functions:**
+
+1. **`director(project_gdb, Sequential, table_names)`** — Main orchestration function:
+ - **Pre-processing** (sequential):
+ - Calls `preprocessing()` to stage required data for each region into separate GDBs
+ - Creates region-specific workspaces under Scratch folder
+
+ - **Sequential or parallel processing**:
+ - **Sequential mode**: Calls `worker()` sequentially for each region
+ - **Parallel mode**: Uses `multiprocessing.Pool` (processes = CPU count - 2, maxtasksperchild=1); monitors job completion with status polling every ~7.5×processes seconds
+
+ - **Post-processing** (sequential):
+ - Compacts project GDB
+
+2. **`script_tool(project_gdb)`** — Entry point with hardcoded batch grouping:
+ - **Production batches** (parallel):
+ - Batch 1: `AI_IDW, EBS_IDW, GOA_IDW` (3 regions, currently in test mode)
+ - Batch 2: `ENBS_IDW, GOA_IDW, NBS_IDW` (3 regions, alt batch)
+ - Batch 3: `SEUS_FAL_IDW, SEUS_SPR_IDW, SEUS_SUM_IDW` (3 regions)
+ - Batch 4: `HI_IDW, WC_ANN_IDW, WC_TRI_IDW` (3 regions)
+ - Batch 5: `GMEX_IDW, NEUS_FAL_IDW, NEUS_SPR_IDW` (3 regions)
+ - Currently in test mode with single region: `SEUS_FAL_IDW`
+ - Logs timing and environment info
+
+### **Worker (create_species_richness_rasters_worker.py)**
+
+**Purpose**: For a single region, generates species richness rasters (total and core species) by counting presence/absence across all interpolated species distributions for each year.
+
+**Processing Pipeline:**
+
+#### **Phase 1: Data Staging (`preprocessing()` function)**
+
+Pre-processes data for all regions into isolated region GDBs:
+
+- Clears scratch folder if `clear_folder=True`
+- For each region (e.g., `AI_IDW`), creates:
+ - Region-specific GDB: `{scratch_folder}\{table_name}.gdb`
+ - Region scratch workspace: `{scratch_folder}\{table_name}\scratch.gdb`
+
+- Copies data to region GDB:
+ - `Datasets` table (metadata for region parameters: cell size, DatasetCode)
+ - `{table_name}_LayerSpeciesYearImageName` table (catalog of rasters; contains metadata about which species/years have interpolated surfaces)
+ - `{table_name}_Raster_Mask` raster (spatial template: extent, cell size, coordinate system)
+
+- Creates these tables by:
+ - Filtering Datasets to matching region
+ - Copying LayerSpeciesYearImageName table
+ - Copying Raster_Mask raster
+
+#### **Phase 2: Richness Raster Generation (`worker()` function)**
+
+For each year, generates two types of species richness rasters:
+
+**1. Prepare Raster Catalog:**
+- Queries `{table_name}_LayerSpeciesYearImageName` table for all interpolated surfaces
+- Extracts 5 fields: DatasetCode, CoreSpecies, Year, Variable, ImageName
+- Filters for: `Variable NOT IN ('Core Species Richness', 'Species Richness')` (excludes pre-computed richness layers)
+- Further filters: `DatasetCode = '{datasetcode}'` (single dataset per region)
+- Builds `input_rasters` dict: `{image_name.tif: [variable, corespecies, year, path_to_tif]}`
+- Result: Catalog of all species/year raster TIF files available
+
+**2. Prepare Output Paths:**
+- Total richness path: `{project_folder}\Images\{table_name}\_Species Richness\{table_name}_Species_Richness_{year}.tif`
+- Core richness path: `{project_folder}\Images\{table_name}\_Core Species Richness\{table_name}_Core_Species_Richness_{year}.tif`
+- Scratch paths for temporary working space
+- Creates directories if they don't exist
+
+**3. Extract Raster Dimensions and Extent:**
+- Gets row count and column count from `{table_name}_Raster_Mask` using `GetRasterProperties()`
+- Extracts lower-left corner point from mask extent (used as spatial reference for output raster)
+- Used to create zero-initialized NumPy arrays
+
+**4. Generate Total Species Richness Rasters:**
+
+For each unique year in the input rasters:
+
+ **a) Initialize richness array:**
+ - Creates zero-filled NumPy array: `shape=(rowCount, columnCount), dtype='float32'`
+ - Will accumulate presence counts per raster cell
+
+ **b) Process all species for the year:**
+ - Filters input_rasters to only records where `Year == {target_year}`
+ - For each species/year combination:
+ - Loads raster as NumPy array: `arcpy.RasterToNumPyArray(_in_raster, nodata_to_value=np.nan)`
+ - Replaces negative values with NaN (removes invalid data)
+ - Converts positive values (>0.0) to 1.0 (binary presence indicator)
+ - Adds species binary array to accumulating richness array: `richnessArray += rasterArray`
+
+ **c) Convert array to raster:**
+ - Casts array as float32 for consistency
+ - Converts NumPy array to raster using `arcpy.NumPyArrayToRaster()` with:
+ - Lower-left corner point (spatial reference)
+ - Cell size (from region metadata)
+ - NoData value: -3.40282346639e+38 (most negative finite float32)
+ - Saves raster to TIF file path
+ - Calculates statistics: `arcpy.management.CalculateStatistics()`
+
+ **d) Set metadata:**
+ - Sets raster title: image filename with underscores replaced by spaces
+ - Synchronizes metadata: `raster_md.synchronize("ALWAYS")`
+
+**5. Generate Core Species Richness Rasters:**
+
+Similar process as total richness, but filters to only core species (`CoreSpecies == "Yes"`):
+
+ - Extracts unique years where core species data exists
+ - For each core-species year:
+ - Filters input_rasters to `Year == {year} AND CoreSpecies == "Yes"`
+ - Accumulates presence/absence binary counts per cell
+ - Saves to core richness output path
+ - Sets metadata
+
+**Output Result:**
+
+For each region and year:
+- `{table_name}_Species_Richness_{year}.tif` — Count of all species present in each fishnet cell
+- `{table_name}_Core_Species_Richness_{year}.tif` — Count of core species only
+
+**Technical Details:**
+
+- **NumPy Operations**: Uses NumPy array operations for efficient raster counting (all species processed in-memory)
+- **Binary Conversion**: All input rasters (interpolated surfaces with WTCPUE values) converted to 1.0 (presence) or NaN (absence/nodata)
+- **Accumulation**: `np.add(richnessArray, rasterArray)` counts species presence per cell
+- **Output Type**: float32 for compatibility with ArcGIS raster formats
+- **NoData Handling**: NoData pixels from input treated as NaN (not counted toward richness)
+- **Coordinate System**: Output rasters inherit spatial reference from `region_raster_mask`
+
+#### **Phase 3: Cleanup**
+
+- Deletes intermediate datasets: `Datasets`, `LayerSpeciesYearImageName`, `Raster_Mask`
+- Compacts region GDB
+- Logs completion message
+
+### **Key Data Transformations**
+
+1. **Interpolated Surface → Binary Presence:**
+ - Input: WTCPUE values (already back-transformed from the cube-root MapValue during raster creation)
+ - Conversion: Any value > 0.0 becomes 1.0; negative/NaN values remain NaN
+ - Effect: Creates presence/absence indicator for each species
+
+2. **Raster Accumulation:**
+ - Sum across all species rasters for a given year
+ - Result: Each fishnet cell contains count of species observed (0 to N species)
+ - Semantics: Higher values = greater species diversity
+
+3. **Core Species Filtering:**
+ - Subset of species marked with `CoreSpecies = "Yes"` in metadata
+ - Richness rasters generated separately: total vs. core
+ - Enables comparison: all species diversity vs. managed/target species diversity
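Transformations 1 and 2 reduce to a short NumPy loop. A toy sketch with 2×2 arrays standing in for `RasterToNumPyArray` output (the production worker additionally carries NaN/NoData bookkeeping as described above):

```python
import numpy as np

# Two hypothetical species surfaces for the same year; negative cells stand in
# for NoData as returned by arcpy.RasterToNumPyArray
species_rasters = [
    np.array([[0.5, 0.0], [-1.0, 2.0]], dtype="float32"),
    np.array([[1.2, 3.4], [-1.0, 0.0]], dtype="float32"),
]

richness = np.zeros((2, 2), dtype="float32")
for arr in species_rasters:
    # >0 counts as presence; 0, negative (NoData), and NaN cells do not count
    richness += np.where(arr > 0.0, 1.0, 0.0).astype("float32")

print(richness)  # rows: [2., 1.] and [0., 1.]
```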
+
+### **Integration Points**
+
+- **Inputs**:
+ - Interpolated species rasters from `create_rasters_worker` outputs (saved to `{project_folder}\Images\{table_name}\{variable}\{image_name}.tif`)
+ - `{table_name}_LayerSpeciesYearImageName` table (maps species/year to raster files; from data catalog)
+ - `{table_name}_Raster_Mask` raster (spatial template)
+ - `Datasets` table (region metadata: DatasetCode, CellSize)
+
+- **Outputs**:
+ - `_Species Richness\{table_name}_Species_Richness_{year}.tif` — Total richness rasters
+ - `_Core Species Richness\{table_name}_Core_Species_Richness_{year}.tif` — Core richness rasters
+ - Rasters saved to: `{project_folder}\Images\{table_name}\`
+ - One raster per year per richness type per region
+
+- **Data flow**:
+ ```
+ Interpolated Species Rasters (TIF files)
+ → Binary presence/absence conversion (NumPy)
+ → Accumulation per cell across species (NumPy.add)
+ → NumPy array → Raster conversion
+ → Richness TIF output with statistics & metadata
+ ```
+
+ - #### **Technical Architecture**
+
+ - **Array-based computation**: All raster operations performed via NumPy for efficiency (avoids pixel-by-pixel cursor operations)
+ - **Lazy evaluation**: Rasters loaded only when needed (one species/year at a time)
+ - **Memory efficiency**: Processes complete regions before cleanup; avoids storing all input rasters simultaneously
+ - **Coordinate system consistency**: Output rasters tied to `region_raster_mask` extent/resolution/SR
+ - **Batch parallelism**: Director splits the IDW regions into 5 sequential batches (3 regions each) to manage CPU load across multiple `director()` calls
+ - **Dual richness generation**: Single worker pass generates both total and core richness by filtering CoreSpecies flag during accumulation
+
+
+- ### Create Region Mosaics Director and Worker
+ - This director/worker pair creates mosaic datasets and Cloud Raster Format (CRF) files from interpolated species rasters, enabling efficient multi-band image services for portal publishing.
+
+ ### **Director (create_mosaics_director.py)**
+
+ **Purpose**: Orchestrates mosaic dataset creation for all IDW regions, aggregating species rasters into multi-dimensional mosaic structures for web services.
+
+ **Key Functions:**
+
+ 1. **`director(project_gdb, Sequential, table_names)`** — Main orchestration function:
+ - **Pre-processing** (sequential):
+ - Calls `preprocessing()` to stage data for each region into separate GDBs
+ - Creates region-specific workspaces under Scratch folder
+
+ - **Sequential or parallel processing**:
+ - **Sequential mode**: Calls `worker()` sequentially for each region
+ - **Parallel mode**: Uses `multiprocessing.Pool` (processes = CPU count - 2, maxtasksperchild=1); monitors job completion with status polling every ~7.5×processes seconds
+
+ - **Post-processing** (sequential):
+ - Walks scratch folder collecting all mosaic datasets and `.crf` files
+ - Copies mosaic datasets to project GDB
+ - Copies `.crf` files to `CRFs` folder
+ - Calls `dismap_tools.import_metadata()` to attach metadata
+ - Deletes source datasets from scratch
+ - Compacts project GDB
+
+ 2. **`script_tool(project_gdb)`** — Entry point with test mode:
+ - Currently in test mode: `Sequential=True, table_names=["SEUS_FAL_IDW"]`
+ - Alternative production batches commented (non-sequential options available)
+ - Logs timing and environment info
+
+ ### **Worker (create_mosaics_worker.py)**
+
+ **Purpose**: For a single region, creates a mosaic dataset by aggregating all interpolated species rasters, then exports to Cloud Raster Format (CRF) for efficient storage and web service delivery.
+
+ **Processing Pipeline:**
+
+ #### **Phase 1: Prepare Output Paths & Metadata**
+
+ - Queries `Datasets` table for region metadata:
+ - Extracts: TableName, DatasetCode, CellSize, MosaicName, MosaicTitle
+
+ - Sets output coordinate system from `{table_name}_Raster_Mask` spatial reference
+
+ #### **Phase 2: Build Input Raster List**
+
+ - Queries `{table_name}_LayerSpeciesYearImageName` table for all rasters to include
+ - Filters by: `DatasetCode = '{datasetcode}'` (region-specific data)
+ - Extracts: Variable, ImageName
+ - Constructs input raster paths:
+ - Path format: `{project_folder}\Images\{table_name}\{variable}\{image_name}.tif`
+ - Special handling: Prepends underscore to "Species Richness" variable for folder naming
+ - Validates: Each raster file must exist; logs errors for missing files
+ - Result: List of input_raster_paths for mosaic ingestion
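The path construction with the underscore convention might look like the sketch below; the folder layout is inferred from the description and the names are hypothetical:

```python
import os

def input_raster_path(project_folder, table_name, variable, image_name):
    # "Species Richness" variables live in underscore-prefixed folders
    folder = "_" + variable if "Species Richness" in variable else variable
    path = os.path.join(project_folder, "Images", table_name, folder, image_name + ".tif")
    if not os.path.isfile(path):
        # The worker logs an error for missing files rather than raising
        print(f"missing raster: {path}")
    return path

p = input_raster_path("DisMAP", "AI_IDW", "Species Richness", "AI_IDW_Species_Richness_2020")
```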
+
+ #### **Phase 3: Create Mosaic Dataset**
+
+ - Calls `arcpy.management.CreateMosaicDataset()`:
+ - Workspace: region GDB
+ - Name: `{mosaic_name}` (e.g., `AI_IDW_Mosaic`)
+ - Coordinate system: From region raster mask (region-specific projection)
+ - Pixel type: 32-bit float (matches interpolated raster values)
+ - One band (single-band species/richness rasters)
+
+ #### **Phase 4: Load Rasters into Mosaic**
+
+ - Calls `arcpy.management.AddRastersToMosaicDataset()` with parameters:
+ - Raster type: "Raster Dataset" (add pre-existing TIF rasters)
+ - Input path: List of all input_raster_paths from Phase 2
+ - **Cell size handling**:
+ - `update_cellsize_ranges = "UPDATE_CELL_SIZES"` — Automatically determine cell size ranges
+ - **Boundary handling**:
+ - `update_boundary = "UPDATE_BOUNDARY"` — Update mosaic extent from input rasters
+ - **Pyramid/statistics**:
+ - `build_pyramids = "NO_PYRAMIDS"` — Skip pyramid building for now
+ - `calculate_statistics = "NO_STATISTICS"` — Skip statistics calculation during load
+ - `estimate_statistics = "NO_STATISTICS"` — Don't estimate statistics
+ - **Duplicate handling**:
+ - `duplicate_items_action = "EXCLUDE_DUPLICATES"` — Skip duplicate rasters
+ - **Spatial reference**:
+ - `force_spatial_reference = "FORCE_SPATIAL_REFERENCE"` — Force region-specific SR
+ - Minimum dimension: 1500 (cells) — Only include larger rasters
+
+ #### **Phase 5: Join Metadata Attributes**
+
+ - Calls `arcpy.management.JoinField()`:
+ - Join on: Mosaic catalog Name field ← LayerSpeciesYearImageName ImageName field
+ - Joins fields:
+ - DatasetCode, Region, Season, Species, CommonName, SpeciesCommonName
+ - CoreSpecies, Year, StdTime, Variable, Value, Dimensions
+ - Result: Mosaic catalog rows enriched with species/year/metadata attributes
+
+ #### **Phase 6: Create Attribute Indexes**
+
+ - Removes existing index if present: `{table_name}_MosaicSpeciesIndex`
+ - Creates new non-unique index on: Species, CommonName, SpeciesCommonName, Year
+ - Improves query performance for species-based filtering in web services
+
+ #### **Phase 7: Calculate Statistics**
+
+ - Calls `arcpy.management.CalculateStatistics()` with:
+ - Skip existing: False (recalculate)
+ - x_skip_factor / y_skip_factor: 1 (use all data)
+
+ #### **Phase 8: Configure Mosaic Properties**
+
+ - Calls `arcpy.management.SetMosaicDatasetProperties()` with detailed configuration:
+ - **Image sizing**:
+ - Maximum image size: 4100×15000 pixels (supports large composite images)
+ - **Compression**:
+ - Allowed: LZ77, None
+ - Default: LZ77
+ - JPEG quality: 75%
+ - LERC tolerance: 0.01
+ - **Resampling/display**:
+ - Resampling: BILINEAR
+ - Clip to footprints: NOT_CLIP
+ - Clip to boundary: CLIP (constrain to mosaic extent)
+ - Blend width: 10 pixels (feather edges)
+ - Viewpoint: 600×300 (north corner bias)
+ - **Mosaic operations**:
+ - Method: FIRST (display first raster when overlapping)
+ - Max items per mosaic: 50 (limit composite slices)
+ - Cell size tolerance: 0.8 (80% match required)
+ - Cell size: `{cell_size} {cell_size}` (from region metadata)
+ - **Metadata**:
+ - Level: FULL (include all metadata)
+ - Transmission fields: All mosaic catalog fields
+ - **Temporal dimension**:
+ - Time enabled: YES
+ - Start/end time field: StdTime (same field for point-in-time data)
+ - Time format: "YYYY" (year-only dimension)
+ - Time interval: 1 year
+ - Time interval units: Years
+ - **Service capabilities**:
+ - Max download items: 20
+ - Max records returned: 1000
+ - Data source type: GENERIC
+ - Minimum pixel contribution: 1 (include all valid pixels)
+
+ #### **Phase 9: Analyze Mosaic Dataset**
+
+ - Calls `arcpy.management.AnalyzeMosaicDataset()` with checker keywords:
+ - FOOTPRINT, FUNCTION, RASTER, PATHS, SOURCE_VALIDITY, STALE
+ - PYRAMIDS, STATISTICS, PERFORMANCE, INFORMATION
+ - Validates: All rasters are accessible, pyramids/statistics present, performance optimal
+
+ #### **Phase 10: Build Multidimensional Information**
+
+ - Calls `arcpy.md.BuildMultidimensionalInfo()`:
+ - Variable field: "Variable" (species name or "Species Richness")
+ - Dimension fields: StdTime (Time Step, Year)
+ - Enables time-based slicing: Web services can query by year
+ - Deletes existing multidimensional info before rebuilding
+
+ #### **Phase 11: Export to Cloud Raster Format (CRF)**
+
+ - Calls `arcpy.management.CopyRaster()`:
+ - Source: Mosaic dataset (in-memory processed)
+ - Output: `{scratch_folder}\{table_name}\{mosaic_name}.crf`
+ - Format: CRF (Cloud Raster Format — compressed, cloud-optimized)
+ - Pixel type: 32-bit float (maintains scientific precision)
+ - NoData value: -3.40282e+38 (most negative finite float32, used as the NoData sentinel)
+ - Process as multidimensional: ALL_SLICES (export all time steps)
+ - No transpose (keep dimension order)
+
+ - Calls `arcpy.management.CalculateStatistics()` on CRF output
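The NoData value above is not arbitrary: it is the most negative finite float32, which NumPy can confirm directly:

```python
import numpy as np

# The sentinel quoted above equals the float32 lower bound
sentinel = float(np.finfo(np.float32).min)
assert -3.41e38 < sentinel < -3.40e38  # approximately -3.4028235e+38
```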
+
+ #### **Phase 12: Cleanup**
+
+ - Deletes intermediate tables: Datasets, LayerSpeciesYearImageName, Raster_Mask
+ - Compacts region GDB
+
+ ### **Key Data Operations**
+
+ 1. **Multi-Dimensional Raster Assembly:**
+ - Combines many 2D rasters (species/year) into 4D mosaic (X, Y, Variable, Time)
+ - Dimension fields enable web service queries: "Get richness for 2020"
+
+ 2. **Metadata Enrichment via Join:**
+ - Links mosaic catalog (N rows = N input rasters) to species/year metadata
+ - Enables filtering/sorting by taxonomy, management body, temporal attributes
+
+ 3. **Temporal Dimension:**
+ - StdTime field as single timestamp per year (point-in-time data)
+ - Time format "YYYY" enables time-series visualization in web services
+
+ 4. **Cloud Raster Format Export:**
+ - CRF enables efficient tiling, caching, and pyramid generation
+ - Supports efficient web delivery without additional processing
+
+ ### **Integration Points**
+
+ - **Inputs**:
+ - Interpolated species rasters from `create_rasters_worker` (TIF files)
+ - Species richness rasters from `create_species_richness_rasters_worker` (TIF files)
+ - `{table_name}_LayerSpeciesYearImageName` table (raster metadata catalog)
+ - `Datasets` table (region metadata: CellSize, MosaicName)
+ - `{table_name}_Raster_Mask` (spatial template: extent, resolution, SR)
+
+ - **Outputs**:
+ - `{table_name}_Mosaic` mosaic dataset in project GDB
+ - `{mosaic_name}.crf` cloud raster format file in `CRFs` folder
+ - Mosaic indexed on: Species, CommonName, SpeciesCommonName, Year (for web service queries)
+ - Multidimensional structure: Variable (species) × Time (year)
+
+ - **Data flow**:
+ ```
+ Interpolated Species/Richness TIFs
+ → AddRastersToMosaicDataset → Build Mosaic Catalog
+ ↓
+ LayerSpeciesYearImageName Table (Metadata)
+ → JoinField → Enrich Catalog
+ ↓
+ SetMosaicDatasetProperties (Configure temporal, compression, bounds)
+ → BuildMultidimensionalInfo (Enable time-series)
+ → CopyRaster to CRF (Cloud-optimized export)
+ ↓
+ CRF File (Portal publishing) + Mosaic Dataset (Project GDB)
+ ```
+
+ - #### **Technical Architecture**
+
+ - **Mosaic as aggregation layer**: Combines isolated raster files (TIFs) into unified queryable structure
+ - **Temporal indexing**: StdTime field enables time-series web services without refactoring
+ - **Cloud Raster Format**: Enables efficient caching and pyramid generation for web services
+ - **Multidimensional support**: Allows portal to expose "Variable" (species) and "Year" (time) as separate dimensions
+ - **Metadata join pattern**: Links mosaic records to taxonomy/management attributes for UI filtering
+ - **Batch parallelism**: Director processes multiple regions in parallel; each worker independently creates mosaic
+
+- ### Create Indicators Table Director and Worker
+ - This director/worker pair generates distribution indicators (center of gravity, min/max coordinates, depth statistics) for each species/year combination using spatial statistics derived from interpolated raster surfaces.
+
+ - #### **Director (create_indicators_table_director.py)**
+
+ - **Purpose**: Orchestrates distribution indicator calculation for all IDW regions, aggregating species-specific spatial statistics into region-level indicator tables.
+
+ - **Key Functions:**
+
+ 1. **`director(project_gdb, Sequential, table_names)`** — Main orchestration function:
+ - **Pre-processing** (sequential):
+ - Calls `preprocessing()` to stage raster and metadata data for each region
+ - Creates region-specific GDBs under Scratch folder
+
+ - **Sequential or parallel processing**:
+ - **Sequential mode**: Calls `worker()` sequentially for each region
+ - **Parallel mode**: Uses `multiprocessing.Pool` (processes = CPU count - 2, maxtasksperchild=1); monitors job completion with status polling every ~7.5×processes seconds
+
+ - **Post-processing** (sequential):
+ - Walks scratch folder collecting all generated `*_Indicators` tables and feature classes
+ - Copies each to project GDB
+ - Compacts project GDB
+
+ 2. **`process_indicator_tables(project_gdb)`** — Consolidation function:
+ - Creates master `Indicators` table in project GDB
+ - Adds standardized fields via `dismap_tools.add_fields()`
+ - Appends all region-specific `*_Indicators` tables into master table
+ - Replaces None values with empty strings in string fields
+ - Updates DateCode field using `dismap_tools.date_code()` for standardization
+ - Synchronizes metadata
+
+ 3. **`script_tool(project_gdb)`** — Entry point with test mode:
+ - Currently disabled: Test=False (director calls commented out)
+ - Calls `process_indicator_tables()` to combine all indicator tables into master
+ - Logs timing and environment info
+
+ ### **Worker (create_indicators_table_worker.py)**
+
+ **Purpose**: For a single region, calculates distribution indicators (center of gravity, percentile bounds, offsets, standard errors) for each species/year from biomass rasters.
+
+ **Processing Pipeline:**
+
+ #### **Phase 1: Create Indicators Table & Load Data**
+
+ - Creates empty table: `{table_name}_Indicators`
+ - Calls `dismap_tools.add_fields()` to populate schema (200+ fields from `field_definitions.json`)
+ - Queries `Datasets` table for region metadata:
+ - Extracts: DatasetCode, TableName, CellSize, Region, Season, DateCode, DistributionProjectCode, DistributionProjectName, SummaryProduct
+
+ #### **Phase 2: Set Spatial Environment**
+
+ - Sets environment parameters:
+ - Cell size: From region metadata
+ - Extent, mask, snapRaster: From `{table_name}_Raster_Mask` (spatial alignment)
+ - Prepares raster references:
+ - `{table_name}_Bathymetry` — Depth values per cell (negative values; zero is surface)
+ - `{table_name}_Latitude` — Geographic latitude per cell
+ - `{table_name}_Longitude` — Geographic longitude per cell (0-360 initially, converted to -180 to 180)
+
+ #### **Phase 3: Prepare Raster Catalog**
+
+ - Queries `{table_name}_LayerSpeciesYearImageName` for all species rasters
+ - Filters: `DatasetCode = '{datasetcode}'` AND `NOT Species Richness`
+ - Builds input_rasters nested dict structure: `{variable: {year: [metadata + path]}}`
+ - Validates: Each raster file exists
+
+ #### **Phase 4: Calculate Distribution Indicators (Per Species/Year)**
+
+ For each species and year, calculates 5 dimensions of distribution:
+
+ ##### **A. Biomass Statistics**
+ - Loads biomass raster as NumPy array (from species/year interpolated surface)
+ - Replaces negative/zero values with NaN
+ - Calculates: `sumBiomassArray = np.nansum(biomassArray)`
+ - Logs: Maximum biomass value (>0 indicates valid data)
+
+ ##### **B. Center of Gravity & Percentile Bounds — Latitude**
+
+ - Loads latitude raster array; aligns with biomass (NaN where biomass is NaN)
+ - **Percentile calculation**:
+ - Sorts latitude values by latitude coordinate
+ - Calculates cumulative biomass sum: `cumSum = np.nancumsum(sorted_biomass)`
+ - Converts to quantile: `quantile = cumSum / total_biomass`
+ - Finds 95th and 5th percentile latitude bounds using closest quantile match
+ - Result: `MaximumLatitude` (95th percentile), `MinimumLatitude` (5th percentile)
+
+ - **Center of Gravity**:
+ - Calculates weighted latitude: `weighted = biomass × latitude`
+ - Result: `CenterOfGravityLatitude = Σ(weighted) / Σ(biomass)`
+
+ - **Offset**:
+ - On first year of species: `first_year_offset_latitude = CenterOfGravityLatitude`
+ - For subsequent years: `OffsetLatitude = CenterOfGravityLatitude - first_year_offset_latitude`
+ - Semantics: Tracks migration direction relative to baseline year
+
+ - **Standard Error**:
+ - `variance = np.nanvar(weighted_array)`
+ - `count = np.count_nonzero(~np.isnan(weighted_array))`
+ - Result: `CenterOfGravityLatitudeSE = √variance / √count`
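The latitude calculations above can be illustrated with a small NumPy sketch; the flattened toy arrays stand in for the aligned biomass and latitude rasters (the real worker operates on full 2-D arrays):

```python
import numpy as np

biomass = np.array([1.0, 2.0, 4.0, 3.0, np.nan])   # NaN where no biomass
latitude = np.array([40.0, 41.0, 42.0, 43.0, 44.0])

# Center of gravity: biomass-weighted mean latitude
weighted = biomass * latitude
cog = np.nansum(weighted) / np.nansum(biomass)      # 41.9 here

# Percentile bounds: cumulative biomass along sorted latitude
order = np.argsort(latitude)
quantile = np.nancumsum(biomass[order]) / np.nansum(biomass)
min_lat = latitude[order][np.argmin(np.abs(quantile - 0.05))]  # 5th percentile
max_lat = latitude[order][np.argmin(np.abs(quantile - 0.95))]  # 95th percentile

# Standard error as described: sqrt(variance) / sqrt(n)
n = np.count_nonzero(~np.isnan(weighted))
se = np.sqrt(np.nanvar(weighted)) / np.sqrt(n)
```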
+
+ ##### **C. Center of Gravity & Percentile Bounds — Longitude**
+
+ - **International Date Line Handling**:
+ - Normalizes longitude into the 0/360 range (idempotent if the raster is already 0–360): `lon_360 = np.mod(longitude, 360)`
+ - Applies same percentile/CoG/offset/SE calculations as latitude
+ - Converts back: `lon_180 = np.mod(lon_360 - 180, 360) - 180`
+      - Result: Handles species crossing the Pacific antimeridian without wrapping errors
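+
+    A minimal sketch of the 0/360 round trip (variable names assumed):
+
+    ```python
+    import numpy as np
+
+    lon_360 = np.mod(longitudeArray, 360.0)           # -180..180 → 0..360; no seam at ±180°
+    # ... percentile / CoG / offset / SE computed on lon_360 exactly as for latitude ...
+    lon_180 = np.mod(lon_360 - 180.0, 360.0) - 180.0  # back to -180..180 for reporting
+    ```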
+
+ ##### **D. Center of Gravity & Percentile Bounds — Depth (Bathymetry)**
+
+ - Loads bathymetry raster array (negative values for depth below surface; zero at surface)
+ - Aligns with biomass (NaN where biomass is NaN)
+ - Applies same percentile/CoG/offset/SE calculations as lat/lon
+ - Result: `CenterOfGravityDepth` (weighted mean depth), `MinimumDepth` (5th percentile shallow), `MaximumDepth` (95th percentile deep)
+
+ #### **Phase 5: Row Population**
+
+ For each species/year combination with biomass > 0:
+ - Creates row with 26 fields:
+ - Standard fields: DatasetCode, Region, Season, DateCode, Species, CommonName, CoreSpecies, Year, DistributionProjectName, DistributionProjectCode, SummaryProduct (11 fields)
+ - Latitude indicators: CenterOfGravityLatitude, MinimumLatitude, MaximumLatitude, OffsetLatitude, CenterOfGravityLatitudeSE (5 fields)
+ - Longitude indicators: CenterOfGravityLongitude, MinimumLongitude, MaximumLongitude, OffsetLongitude, CenterOfGravityLongitudeSE (5 fields)
+ - Depth indicators: CenterOfGravityDepth, MinimumDepth, MaximumDepth, OffsetDepth, CenterOfGravityDepthSE (5 fields)
+
+ For species/years with biomass = 0:
+ - All indicator fields set to None (null in GDB)
+
+ #### **Phase 6: Insert Rows into Table**
+
+ - Accumulates all row_values in memory
+ - Uses `InsertCursor` to bulk-insert all rows
+ - Replaces NaN values (self != self check) with None for proper null handling in GDB
+ - Logs final record count: `"{table_name}_Indicators has N records"`
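+
+    The NaN-to-None substitution and bulk insert might look like this (table and field names are placeholders):
+
+    ```python
+    # NaN is the only value unequal to itself, so `v != v` detects it without NumPy
+    row_values = [[None if v != v else v for v in row] for row in row_values]
+
+    with arcpy.da.InsertCursor(f"{table_name}_Indicators", field_names) as cursor:
+        for row in row_values:
+            cursor.insertRow(row)
+    ```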
+
+ #### **Phase 7: Cleanup**
+
+ - Deletes intermediate datasets: Datasets, Bathymetry, Latitude, Longitude, Raster_Mask, LayerSpeciesYearImageName
+ - Compacts region GDB
+
+ ### **Key Data Operations**
+
+ 1. **Weighted Center of Gravity**:
+ - Formula: `CoG = Σ(biomass × coordinate) / Σ(biomass)`
+ - Effect: Locates mean position weighted by species abundance
+ - Used for: Tracking population distribution shifts over time
+
+ 2. **Percentile Bounds (5th/95th)**:
+ - Accumulates cumulative biomass sum along sorted coordinate axis
+    - Finds the coordinates below which 5% (lower bound) and 95% (upper bound) of the cumulative biomass lies
+ - Effect: Robust bounds capturing 90% of population (insensitive to outliers)
+
+ 3. **Offset Tracking**:
+ - Baseline: First year of each species' data
+ - Subsequent years: Difference from baseline CoG
+    - Semantics: Measures northward/southward (or shallower/deeper) migration relative to the initial distribution
+
+ 4. **Standard Error Calculation**:
+ - Measures variability in weighted coordinate values
+ - Formula: `SE = √(variance) / √(count)`
+ - Effect: Indicates confidence in CoG estimate (lower SE = more concentrated distribution)
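+
+ Offset tracking reduces to a per-species baseline lookup (sketch; the worker's actual bookkeeping may differ):
+
+ ```python
+ first_year_cog = {}  # species → CoG latitude of its first year with data
+
+ if species not in first_year_cog:
+     first_year_cog[species] = CenterOfGravityLatitude
+ OffsetLatitude = CenterOfGravityLatitude - first_year_cog[species]  # 0.0 in the baseline year
+ ```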
+
+ ### **Integration Points**
+
+ - **Inputs**:
+ - Interpolated species rasters from `create_rasters_worker` (TIF files with WTCPUE)
+ - `{table_name}_LayerSpeciesYearImageName` table (species/year catalog)
+ - `{table_name}_Raster_Mask` (spatial template)
+ - `{table_name}_Bathymetry` (depth raster from `create_region_bathymetry_worker`)
+ - `{table_name}_Latitude` and `{table_name}_Longitude` (coordinate rasters from `create_region_fishnets_worker`)
+ - `Datasets` table (region metadata)
+
+ - **Outputs**:
+ - `{table_name}_Indicators` table — Distribution statistics per species/year
+ - Master `Indicators` table (consolidated from all regions)
+ - One row per species/year combination with valid biomass
+
+ - **Data flow**:
+ ```
+ Interpolated Species Rasters (Biomass)
+ → NumPy array loading
+ ↓
+ Latitude/Longitude/Bathymetry Rasters
+ → Weighted CoG calculation
+ → Percentile bound extraction
+ ↓
+ Offset tracking (baseline year subtraction)
+ → Standard error calculation
+ ↓
+ Indicators Table Row Construction
+ → InsertCursor → Region-specific table
+ ↓
+ Master Indicators Table (all regions appended)
+ ```
+
+ ### **Computational Architecture**
+
+ - **NumPy-based efficiency**: All spatial statistics computed via array operations (no pixel-by-pixel cursors)
+ - **Multi-dimensional calculation**: Single pass over rasters generates 5 spatial dimensions (lat, lon, depth × CoG + bounds)
+ - **Baseline year tracking**: Per-species first year stored to enable offset calculation
+ - **International date line handling**: Special modulo arithmetic prevents wrapping errors at ±180°
+ - **Zero-biomass handling**: Skips computation when `maximumBiomass == 0` (avoids NaN propagation)
+ - **Batch parallelism**: Director processes multiple regions; each worker independently calculates indicators
+
+ ### **Field Output Summary**
+
+ | Field Group | Fields | Calculation |
+ |---|---|---|
+ | **Identifiers** | DatasetCode, Region, Season, Year | From Datasets table & raster metadata |
+ | **Taxonomy** | Species, CommonName, CoreSpecies | From LayerSpeciesYearImageName |
+ | **Spatial Center** | CenterOfGravityLatitude, CenterOfGravityLongitude, CenterOfGravityDepth | Σ(biomass × coordinate) / Σ(biomass) |
+ | **Percentile Bounds** | MinimumLatitude, MaximumLatitude, MinimumLongitude, MaximumLongitude, MinimumDepth, MaximumDepth | 5th/95th percentile of coordinate distribution |
+ | **Migration** | OffsetLatitude, OffsetLongitude, OffsetDepth | CoG(year) - CoG(first_year) |
+ | **Uncertainty** | CenterOfGravityLatitudeSE, CenterOfGravityLongitudeSE, CenterOfGravityDepthSE | √(variance / count) of weighted coordinates |
+
+- ### Publish to Portal Director
+ - Publishes processed datasets to ArcGIS Portal with credentials
+  - #### **Director (publish_to_portal_director.py)**
+
+ **Purpose & Architecture:**
+ This is the final (10th) director-only file in the DisMAP pipeline, responsible for orchestrating ArcGIS Portal publishing workflows. Unlike previous director/worker pairs that execute parallel spatial processing, this director manages sequential feature service creation, service definition draft generation, metadata enrichment, and portal upload operations. It serves as the gateway for publishing all DisMAP-processed datasets (feature classes, tables, indicators, mosaics) as web services to ArcGIS Portal.
+
+ **Core Functions:**
+
+ 1. **`feature_sharing_draft_report(sd_draft="")`** (Lines 17-60)
+ - Parses XML service definition draft files (`.sddraft`)
+ - Extracts all Key-Value property pairs via DOM parsing
+ - Displays configuration details for manual validation before publishing
+ - Used for transparency: shows maxRecordCount, ServiceTitle, and all portal configurations
+ - Error handling: comprehensive exception catching for XML parsing failures
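+
+    In outline, the report's DOM walk might resemble the following (element names follow the usual `.sddraft` Key/Value layout; treat this as a sketch, not the script's exact code):
+
+    ```python
+    from xml.dom.minidom import parse
+    import arcpy
+
+    def feature_sharing_draft_report(sd_draft=""):
+        doc = parse(sd_draft)
+        for key in doc.getElementsByTagName("Key"):
+            if key.hasChildNodes():
+                value = key.nextSibling  # paired <Value> element
+                text = value.firstChild.data if value is not None and value.hasChildNodes() else ""
+                arcpy.AddMessage(f"\t{key.firstChild.data}: {text}")
+    ```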
+
+ 2. **`create_feature_class_layers(project_gdb="")`** (Lines 62-420)
+ - **Pre-processing phase**: Workspace setup, scratch GDB creation, ArcPy environment configuration
+ - **Core logic**: Iterates over all publishable datasets (feature classes + tables):
+ - `*Sample_Locations` feature classes
+ - `DisMAP_Regions` feature class
+ - `Indicators`, `Species_Filter`, `DisMAP_Survey_Info`, species persistence tables
+ - **Layer file creation**: For each dataset:
+ - Creates feature layer (MakeFeatureLayer) or table view (MakeTableView)
+ - Saves as `.lyrx` file to `Layers\` folder with dataset title name
+ - Applies metadata copying (title, tags, summary, description, credits, access constraints)
+ - Exports layer to PNG thumbnail (288×192 px, 96 DPI)
+ - **Time enablement**: Detects `StdTime` field and configures temporal layer properties
+ - Sets UTC timezone, calculates temporal extent (start/end dates)
+ - Outputs time range diagnostic information
+ - **Map & metadata lifecycle**:
+ - Creates/overwrites ArcGIS Pro map per dataset
+ - Adds layer file + "Terrain with Labels" basemap
+ - Saves layer file metadata (title, tags, summary) to XML export
+ - Calls `parse_xml_file_format_and_save()` for formatted metadata export
+ - **Post-processing**: Cleanup—deletes temporary maps, saves project
+ - Integration: Depends on upstream `{table_name}` features created by prior directors
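+
+    The per-dataset layer-file step can be sketched as (paths and titles are assumptions):
+
+    ```python
+    from arcpy import metadata as md
+
+    layer = arcpy.management.MakeFeatureLayer(dataset_path, dataset_title).getOutput(0)
+    lyrx_path = rf"{layers_folder}\{dataset_title}.lyrx"
+    arcpy.management.SaveToLayerFile(layer, lyrx_path)
+
+    # Copy dataset metadata (title, tags, summary, ...) onto the layer file
+    dataset_md = md.Metadata(dataset_path)
+    layer_md = md.Metadata(lyrx_path)
+    layer_md.copy(dataset_md)
+    layer_md.save()
+    ```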
+
+ 3. **`create_feature_class_services(project_gdb="")`** (Lines 422-900)
+ - **Purpose**: Primary service publishing function—creates feature service definitions, stages, and uploads to Portal
+ - **Pre-processing**: Same workspace setup as `create_feature_class_layers()`
+ - **Service definition draft generation** (Lines 550-630):
+ - Loads layer files from `Layers\` folder
+ - Calls `map.getWebLayerSharingDraft()` for FEATURE service type on HOSTING_SERVER
+ - Configures draft properties:
+ - `allowExporting = False`
+ - `offline = False`
+ - `overwriteExistingService = True`
+ - `portalFolder = "DisMAP {project_name}"`
+ - Metadata: credits, description, summary, tags, useLimitations from layer metadata
+ - Exports draft to `.sddraft` file in `Publish\` folder
+ - **SD Draft XML modification** (Lines 632-680):
+ - Parses `.sddraft` XML with DOM
+ - Updates `maxRecordCount`: 2000 → 10000 (supports larger queries)
+ - Updates `ServiceTitle` to feature service title
+ - Writes modified XML back to `.sddraft`
+ - Calls `feature_sharing_draft_report()` for validation display
+ - **Service staging & upload** (Lines 682-710):
+ - `arcpy.server.StageService()`: Creates `.sd` service definition from `.sddraft`
+ - `arcpy.server.UploadServiceDefinition()`: Publishes to Portal with parameters:
+ - `in_server = "HOSTING_SERVER"` (cloud-based Portal, not federated)
+ - `in_folder_type = "FROM_SERVICE_DEFINITION"` (uses embedded folder path)
+ - `in_startupType = "STARTED"` (service starts immediately after publishing)
+ - `in_override = "OVERRIDE_DEFINITION"` (replaces existing service)
+ - `in_my_contents = "NO_SHARE_ONLINE"` (no automatic sharing)
+ - `in_public = "PRIVATE"` (private by default; organization can publish)
+ - `in_organization = "NO_SHARE_ORGANIZATION"` (no org-wide sharing)
+ - **Post-publishing**: Lists all maps in project, saves APRX, cleanup
+ - **Integration**: Consumes layer files from `create_feature_class_layers()`, outputs web services on Portal
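+
+    The draft → stage → upload sequence compresses to a sketch like this (the XML edit follows Esri's documented `.sddraft` pattern; variable names are assumptions):
+
+    ```python
+    from xml.dom.minidom import parse
+
+    sddraft = map_obj.getWebLayerSharingDraft("HOSTING_SERVER", "FEATURE", service_title)
+    sddraft.overwriteExistingService = True
+    sddraft.portalFolder = f"DisMAP {project_name}"
+    sddraft.exportToSDDraft(sddraft_path)
+
+    # Raise maxRecordCount from 2000 to 10000 before staging
+    doc = parse(sddraft_path)
+    for key in doc.getElementsByTagName("Key"):
+        if key.hasChildNodes() and key.firstChild.data == "maxRecordCount":
+            key.nextSibling.firstChild.data = "10000"
+    with open(sddraft_path, "w") as f:
+        doc.writexml(f)
+
+    arcpy.server.StageService(sddraft_path, sd_path)
+    arcpy.server.UploadServiceDefinition(sd_path, "HOSTING_SERVER")
+    ```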
+
+ 4. **`create_image_services(project_gdb="")`** (Lines 902-1220)
+ - **Status**: Partially implemented/commented out; framework present but not fully active
+ - **Intended purpose**: Image service publishing for mosaic datasets (multidimensional rasters)
+ - **Expected workflow** (from commented code):
+ - Creates mosaic dataset from source rasters
+ - Generates image service definition draft via `CreateImageSDDraft()`
+ - Stages and uploads image service to Portal via ArcGIS Server
+ - Supports multidimensional imagery with time and variable dimensions
+ - **Current state**: Only skeleton implementation; all operational logic in commented sections
+ - **Note**: Mosaic creation itself handled by `create_mosaics_director/worker` (prior stage); this would consume those mosaics
+
+ 5. **`create_maps(project_gdb="")` & metadata template functions** (Lines 1222-2550+)
+ - **Status**: Mostly commented out (development/archive code)
+ - **Purpose**: Map layout generation, XML metadata template creation/import, thumbnails
+ - **Key archived patterns** (from commented code):
+ - Dataset enumeration via `arcpy.da.Walk()` (GDB traversal)
+ - Metadata template assignment per dataset type (Indicators, Sample_Locations, Mosaic, etc.)
+ - XML metadata export/import: `saveAsXML()`, metadata copying from templates
+ - Year range extraction for temporal datasets: `unique_years()` function
+ - Layout creation and export to JPEG (map thumbnails)
+    - **Note**: These functions will likely evolve as the Portal metadata publishing strategy matures
+
+ 6. **`script_tool(project_gdb="")`** (Lines 2563-2700)
+ - **Orchestration driver**: Central control logic for all publishing functions
+ - **Execution flags** (all currently False for development/testing):
+ - `CreateFeatureClassLayers = False`
+ - `CreateFeaturClasseServices = False`
+ - `CreateImagesServices = False`
+ - `CreateMaps = False`
+ - **Timing & diagnostics**:
+ - Captures start time via `time.time()`
+ - Logs: Python version, environment name, execution location
+ - Calculates elapsed time in H:M:S format
+ - **Workspace setup**: Creates scratch GDB if missing; sets ArcPy environment
+ - **Integration point**: Takes `project_gdb` parameter (default: `August 1 2025.gdb`)
+ - **Error handling**: Wraps all function calls in try/except with SystemExit propagation
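+
+    The timing scaffold amounts to (sketch):
+
+    ```python
+    import time
+
+    start_time = time.time()
+    # ... publishing functions guarded by the boolean flags ...
+    hours, rem = divmod(time.time() - start_time, 3600)
+    minutes, seconds = divmod(rem, 60)
+    arcpy.AddMessage(f"Elapsed: {int(hours)}:{int(minutes):02d}:{seconds:05.2f} (H:M:S)")
+    ```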
+
+ **Data Pipeline Integration:**
+
+ ```
+ Upstream inputs:
+ ├── Feature Classes (created by create_region_sample_locations)
+ │ └── *Sample_Locations, DisMAP_Regions
+ ├── Tables (created by create_indicators_table, create_species_year_image_name_table)
+ │ └── Indicators, Species_Filter, LayerSpeciesYearImageName, Survey Info
+ └── Mosaics & CRFs (created by create_mosaics_director/worker)
+ └── {Region}_Mosaic, {Region}_Mosaic.crf
+
+ Publishing outputs:
+ ├── Feature Services (hosted on Portal)
+ │ └── {Dataset Service Title} (with web-accessible points, regions, indicators)
+ ├── Image Services (intended, currently inactive)
+ │ └── Multidimensional mosaic services (species × year)
+ ├── Layer Files (.lyrx format)
+ │ └── Stored locally for reuse, metadata-enriched
+ └── Metadata XML exports
+ └── Formatted metadata in Metadata_Export folder
+ ```
+
+ **Technical Patterns & Features:**
+
+ - **Metadata-driven architecture**: Uses `dataset_title_dict()` to lookup service titles, descriptions, credits from centralized metadata dictionary—enables dynamic naming without hardcoding
+ - **XML DOM manipulation**: Direct parsing/modification of `.sddraft` files to adjust service parameters (maxRecordCount, ServiceTitle) before staging
+ - **Layer file hierarchy**: `.lyrx` files serve as reusable layer definitions—contain symbology, field visibility (e.g., specific fields marked VISIBLE NONE for filtering), time configuration
+ - **Portal folder organization**: All services grouped under `"DisMAP {project_name}"` folder for organizational clarity
+ - **Temporal configuration**: Automatic time enablement for datasets with `StdTime` field; UTC timezone standardization
+ - **Metadata synchronization**: `metadata.synchronize("ALWAYS")` ensures dataset metadata propagates to feature layers and maps
+ - **Field visibility control**: Table views configured with specific field info string (26+ fields enumerated for Indicators table with VISIBLE/NONE flags)
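+
+ The field-visibility pattern boils down to a `field_info` string on the table view (two fields shown here; the real call enumerates every Indicators field):
+
+ ```python
+ field_info = "Species Species VISIBLE NONE;CommonName CommonName VISIBLE NONE"
+ arcpy.management.MakeTableView(indicators_table, "Indicators_View", field_info=field_info)
+ ```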
+
+ **Known Limitations & Development Notes:**
+
+ - **Image services incomplete**: Framework present but all operational code commented out; mosaic publishing not yet activated
+ - **Hardcoded test paths**: Default `project_gdb` points to `"August 1 2025"` directory; production deployment requires parameterization
+ - **Portal connection**: Requires authenticated ArcGIS Pro environment; commented code shows Portal URL examples (`https://noaa.maps.arcgis.com/`)
+ - **Single-script execution**: No multiprocessing (unlike previous 9 directors); sequential service publishing—upload time depends on dataset complexity
+ - **Service definition draft report**: XML parsing via custom function; relies on specific key/value structure (fragile if ArcGIS service format changes)
+
+ **Execution Dependencies & Prerequisites:**
+
+ - ArcGIS Pro with valid Portal credentials
+ - Active workspace: project GDB with all upstream datasets
+ - Layer files pre-created (normally output from `create_feature_class_layers()`)
+ - Scratch workspace for temporary operations
+ - Network access to Portal for upload/publish operations
+
+#### Suggestions and Comments
+
+If you see that the data, product, or metadata can be improved, you are invited to create a [pull request](https://github.com/nmfs-fish-tools/DisMAP/pulls) or [submit an issue to the code’s repository](https://github.com/nmfs-fish-tools/DisMAP/issues).
+
+#### NOAA-NMFS GitHub Enterprise Disclaimer
+
+This repository is a scientific product and is not official communication of the National Oceanic and Atmospheric Administration, or the United States Department of Commerce. All NOAA GitHub project code is provided on an ‘as is’ basis and the user assumes responsibility for its use. Any claims against the Department of Commerce or Department of Commerce bureaus stemming from the use of this GitHub project will be governed by all applicable Federal law. Any reference to specific commercial products, processes, or services by service mark, trademark, manufacturer, or otherwise, does not constitute or imply their endorsement, recommendation or favoring by the Department of Commerce.
+The Department of Commerce seal and logo, or the seal and logo of a DOC bureau, shall not be used in any manner to imply endorsement of any commercial product or activity by DOC or the United States Government.
+
+#### NOAA License
+
+Software code created by U.S. Government employees is not subject to copyright in the United States (17 U.S.C. §105). The United States/Department of Commerce reserve all rights to seek and obtain copyright protection in countries other than the United States for Software authored in its entirety by the Department of Commerce. To this end, the Department of Commerce hereby grants to Recipient a royalty-free, nonexclusive license to use, copy, and create derivative works of the Software outside of the United States.
+
+
+
+[U.S. Department of Commerce](https://www.commerce.gov/) \| [National Oceanographic and Atmospheric Administration](https://www.noaa.gov) \| [NOAA Fisheries](https://www.fisheries.noaa.gov/)
+
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/copy_initial_data.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/copy_initial_data.py
new file mode 100644
index 0000000..00e0020
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/copy_initial_data.py
@@ -0,0 +1,205 @@
+"""
+Script documentation
+- Tool parameters are accessed using arcpy.GetParameter() or
+ arcpy.GetParameterAsText()
+- Update derived parameter values using arcpy.SetParameter() or
+ arcpy.SetParameterAsText()
+"""
+import os
+
+import arcpy
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = sys.path[0] + os.sep + "test.py"
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def script_tool(project_folder="", csv_data_file="", dataset_shapefiles="", contacts_file=""):
+ """Script code goes below"""
+ try:
+ from zipfile import ZipFile
+ from arcpy import metadata as md
+ from lxml import etree
+ from io import StringIO
+
+ arcpy.env.overwriteOutput = True
+
+ #aprx = arcpy.mp.ArcGISProject("CURRENT")
+ #aprx.save()
+ #project_folder = aprx.homeFolder
+ arcpy.AddMessage(project_folder)
+ out_data_path = rf"{project_folder}\CSV_Data"
+
+ import json
+ json_path = rf"{out_data_path}\root_dict.json"
+ with open(json_path, "r") as json_file:
+ root_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del json
+
+ arcpy.AddMessage(out_data_path)
+ # Change Directory
+ os.chdir(out_data_path)
+ arcpy.AddMessage(f"Un-Zipping files from {os.path.basename(csv_data_file)}")
+ with ZipFile(csv_data_file, mode="r") as archive:
+ for file in archive.namelist():
+ archive.extract(file, ".")
+ del file
+ del archive
+ arcpy.AddMessage(f"Done Un-Zipping files from {os.path.basename(csv_data_file)}")
+ tmp_workspace = arcpy.env.workspace
+ arcpy.env.workspace = rf"{out_data_path}\python"
+
+ csv_files = arcpy.ListFiles("*_survey.csv")
+
+ arcpy.AddMessage("Copying CSV Files and renaming the file")
+ for csv_file in csv_files:
+ arcpy.management.Copy(rf"{out_data_path}\python\{csv_file}", rf"{out_data_path}\{csv_file.replace('_survey', '_IDW')}")
+ del csv_file
+ del csv_files
+
+ arcpy.env.workspace = tmp_workspace
+ del tmp_workspace
+
+ if arcpy.Exists(rf"{out_data_path}\python"):
+ arcpy.AddMessage("Removing the extract folder")
+ arcpy.management.Delete(rf"{out_data_path}\python")
+ else:
+ pass
+
+ arcpy.AddMessage("Adding metadata to CSV file")
+ tmp_workspace = arcpy.env.workspace
+ arcpy.env.workspace = out_data_path
+
+ csv_files = arcpy.ListFiles("*_IDW.csv")
+ for csv_file in csv_files:
+ arcpy.AddMessage(f"\t{csv_file}")
+ dataset_md = md.Metadata(rf"{out_data_path}\{csv_file}")
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.importMetadata(contacts_file, "ARCGIS_METADATA")
+ dataset_md.save()
+ dataset_md.synchronize("OVERWRITE")
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ target_tree = etree.parse(StringIO(dataset_md.xml), parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root = target_tree.getroot()
+ target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag])
+ new_item_name = target_root.find("Esri/DataProperties/itemProps/itemName").text
+ arcpy.AddMessage(new_item_name)
+## onLineSrcs = target_root.findall("distInfo/distTranOps/onLineSrc")
+## #arcpy.AddMessage(onLineSrcs)
+## for onLineSrc in onLineSrcs:
+## if onLineSrc.find('./protocol').text == "ESRI REST Service":
+## old_linkage_element = onLineSrc.find('./linkage')
+## old_linkage = old_linkage_element.text
+## #arcpy.AddMessage(old_linkage)
+## old_item_name = old_linkage[old_linkage.find("/services/")+len("/services/"):old_linkage.find("/FeatureServer")]
+## new_linkage = old_linkage.replace(old_item_name, new_item_name)
+## #arcpy.AddMessage(new_linkage)
+## old_linkage_element.text = new_linkage
+## #arcpy.AddMessage(old_linkage_element.text)
+## del old_linkage_element
+## del old_item_name, old_linkage, new_linkage
+## onLineSrc.find('./orName').text = f"{new_item_name} Feature Service"
+## del onLineSrcs, new_item_name
+ etree.indent(target_root, space=' ')
+ dataset_md.xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+
+ del dataset_md
+
+ del csv_file
+ del csv_files
+
+ arcpy.env.workspace = tmp_workspace
+ del tmp_workspace
+
+ out_data_path = os.path.join(os.path.dirname(project_folder), "Dataset Shapefiles")
+ arcpy.AddMessage(out_data_path)
+
+ # Change Directory
+ os.chdir(out_data_path)
+ arcpy.AddMessage(f"Un-Zipping files from {os.path.basename(dataset_shapefiles)}")
+ with ZipFile(dataset_shapefiles, mode="r") as archive:
+ for file in archive.namelist():
+ archive.extract(file, ".")
+ del file
+ del archive
+ del out_data_path
+
+ # Imports
+ del md
+
+ # Function Variables
+ del project_folder, csv_data_file, dataset_shapefiles, contacts_file
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == "__main__":
+ try:
+ project_folder = arcpy.GetParameterAsText(0)
+ if not project_folder:
+ project_folder = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026")
+ else:
+ pass
+
+ csv_data_file = arcpy.GetParameterAsText(1)
+ if not csv_data_file:
+ csv_data_file = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\Initial Data\\CSV Data 20260201.zip")
+ else:
+ pass
+
+ contacts_file = arcpy.GetParameterAsText(2)
+ if not contacts_file:
+ contacts_file = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\Initial Data\\DisMAP Contacts 20260201.xml")
+ else:
+ pass
+
+ dataset_shapefiles = arcpy.GetParameterAsText(3)
+ if not dataset_shapefiles:
+ dataset_shapefiles = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\Initial Data\\Dataset Shapefiles 20260201.zip")
+ else:
+ pass
+
+ script_tool(project_folder, csv_data_file, dataset_shapefiles, contacts_file)
+
+ arcpy.SetParameterAsText(3, True)
+
+ del project_folder, csv_data_file, dataset_shapefiles, contacts_file
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_base_bathymetry.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_base_bathymetry.py
new file mode 100644
index 0000000..541a185
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_base_bathymetry.py
@@ -0,0 +1,742 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_base_bathymetry
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 05/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+
+import arcpy # third-parties second
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = sys.path[0] + os.sep + "test.py"
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def raster_properties_report(dataset=""):
+ try:
+ if not dataset:
+ arcpy.AddWarning(f"{dataset} is missing")
+ else:
+ pixel_types = {"U1" : "1 bit", "U2" : "2 bits", "U4" : "4 bits",
+ "U8" : "Unsigned 8-bit integers", "S8" : "8-bit integers",
+ "U16" : "Unsigned 16-bit integers", "S16" : "16-bit integers",
+ "U32" : "Unsigned 32-bit integers", "S32" : "32-bit integers",
+ "F32" : "Single-precision floating point",
+ "F64" : "Double-precision floating point",}
+
+ raster = arcpy.Raster(dataset)
+
+ arcpy.AddMessage(f"\t\t {raster.name}")
+ arcpy.AddMessage(f"\t\t\t Spatial Reference: {raster.spatialReference.name}")
+ arcpy.AddMessage(f"\t\t\t XYResolution: {raster.spatialReference.XYResolution} {raster.spatialReference.linearUnitName}s")
+ arcpy.AddMessage(f"\t\t\t XYTolerance: {raster.spatialReference.XYTolerance} {raster.spatialReference.linearUnitName}s")
+ arcpy.AddMessage(f"\t\t\t Extent: {raster.extent.XMin} {raster.extent.YMin} {raster.extent.XMax} {raster.extent.YMax} (XMin, YMin, XMax, YMax)")
+ arcpy.AddMessage(f"\t\t\t Cell Size: {raster.meanCellHeight}, {raster.meanCellWidth} (H, W)")
+ arcpy.AddMessage(f"\t\t\t Rows, Columns: {raster.height} {raster.width} (H, W)")
+ arcpy.AddMessage(f"\t\t\t Statistics: {raster.minimum} {raster.maximum} {raster.mean} {raster.standardDeviation} (Min, Max, Mean, STD)")
+ arcpy.AddMessage(f"\t\t\t Pixel Type: {pixel_types[raster.pixelType]}")
+
+ del raster
+ del pixel_types
+ del dataset
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def create_alasaka_bathymetry(project_folder=""):
+ try:
+ # Imports
+ from dismap_tools import check_transformation
+
+ # Set History and Metadata logs, set serverity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+        # Set basic workspace variables
+ arcpy.env.workspace = rf"{project_folder}\Bathymetry\Bathymetry.gdb"
+ arcpy.env.scratchWorkspace = rf"{project_folder}\Scratch\scratch.gdb"
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ arcpy.env.cellSize = 1000
+
+ arcpy.env.pyramid = "PYRAMIDS -1 BILINEAR DEFAULT 75 NO_SKIP"
+ arcpy.env.rasterStatistics = "STATISTICS 1 1"
+ arcpy.env.resamplingMethod = "BILINEAR"
+
+ arcpy.env.outputCoordinateSystem = None
+
+ arcpy.AddMessage("Processing Alaska Bathymetry")
+
+# ###--->>> Setting up the base folder bathymetry for all projects
+ # Set Alaska Bathymetry
+
+ ai_bathy = rf"{project_folder}\Bathymetry\Alaska Bathymetry\AI_IDW_Bathy.grd" # ASCII GRIDs
+ ebs_bathy = rf"{project_folder}\Bathymetry\Alaska Bathymetry\EBS_IDW_Bathy.grd" # ASCII GRIDs
+ goa_bathy = rf"{project_folder}\Bathymetry\Alaska Bathymetry\GOA_IDW_Bathy.grd" # ASCII GRIDs
+
+ ai_bathy_grid = rf"{project_folder}\Bathymetry\Bathymetry.gdb\AI_IDW_Bathy_Grid" # ASCII GRIDs imported to the FGDB
+ ebs_bathy_grid = rf"{project_folder}\Bathymetry\Bathymetry.gdb\EBS_IDW_Bathy_Grid" # ASCII GRIDs imported to the FGDB
+ goa_bathy_grid = rf"{project_folder}\Bathymetry\Bathymetry.gdb\GOA_IDW_Bathy_Grid" # ASCII GRIDs imported to the FGDB
+
+ ai_bathy_raster = rf"{project_folder}\Bathymetry\Bathymetry.gdb\AI_IDW_Bathy_Raster"
+ ebs_bathy_raster = rf"{project_folder}\Bathymetry\Bathymetry.gdb\EBS_IDW_Bathy_Raster"
+ goa_bathy_raster = rf"{project_folder}\Bathymetry\Bathymetry.gdb\GOA_IDW_Bathy_Raster"
+
+ ai_bathymetry = rf"{project_folder}\Bathymetry\Bathymetry.gdb\AI_IDW_Bathymetry"
+ ebs_bathymetry = rf"{project_folder}\Bathymetry\Bathymetry.gdb\EBS_IDW_Bathymetry"
+ goa_bathymetry = rf"{project_folder}\Bathymetry\Bathymetry.gdb\GOA_IDW_Bathymetry"
+ enbs_bathymetry = rf"{project_folder}\Bathymetry\Bathymetry.gdb\ENBS_IDW_Bathymetry"
+ nbs_bathymetry = rf"{project_folder}\Bathymetry\Bathymetry.gdb\NBS_IDW_Bathymetry"
+
+ arcpy.AddMessage("Processing Esri Raster Grids")
+
+        spatial_ref = arcpy.Describe(ai_bathy).spatialReference
+        arcpy.AddMessage(f"Spatial Reference for {os.path.basename(ai_bathy)}: {spatial_ref.name}")
+
+        # Set Output Coordinate System
+        arcpy.env.outputCoordinateSystem = spatial_ref
+
+        if spatial_ref.linearUnitName == "Kilometer":
+            arcpy.env.cellSize = 1
+            arcpy.env.XYResolution = 0.1
+            arcpy.env.XYTolerance = 1.0
+        elif spatial_ref.linearUnitName == "Meter":
+            arcpy.env.cellSize = 1000
+            arcpy.env.XYResolution = 0.0001
+            arcpy.env.XYTolerance = 0.001
+
+ del spatial_ref
+
+ arcpy.AddMessage("Copy AI_IDW_Bathy.grd to AI_IDW_Bathy_Grid")
+
+ arcpy.management.CopyRaster(ai_bathy, ai_bathy_grid)
+ arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ del ai_bathy
+
+ arcpy.AddMessage("Copy EBS_IDW_Bathy.grd to EBS_IDW_Bathy_Grid")
+
+ arcpy.management.CopyRaster(ebs_bathy, ebs_bathy_grid)
+ arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ del ebs_bathy
+
+ arcpy.AddMessage("Copy GOA_IDW_Bathy.grd to GOA_IDW_Bathy_Grid")
+
+ arcpy.management.CopyRaster(goa_bathy, goa_bathy_grid)
+ arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ del goa_bathy
+
+ arcpy.AddMessage("Converting AI_IDW_Bathy_Grid from positive values to negative")
+
+ tmp_grid = arcpy.sa.Times(ai_bathy_grid, -1)
+ arcpy.AddMessage("\tTimes: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ tmp_grid.save(ai_bathy_raster)
+ del tmp_grid
+
+ arcpy.AddMessage("Converting EBS_IDW_Bathy_Grid from positive values to negative")
+
+ tmp_grid = arcpy.sa.Times(ebs_bathy_grid, -1)
+ arcpy.AddMessage("\tTimes: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ tmp_grid.save(ebs_bathy_raster)
+ del tmp_grid
+
+ arcpy.AddMessage("Setting GOA_IDW_Bathy_Grid values less than -1.0 to Null")
+
+ tmp_grid = arcpy.sa.SetNull(goa_bathy_grid, goa_bathy_grid, "Value < -1.0")
+ arcpy.AddMessage("\tSet Null: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ tmp_grid.save(goa_bathy_raster+'_SetNull')
+ del tmp_grid
+
+ arcpy.AddMessage("Converting the GOA_IDW_Bathy_Grid from positive values to negative")
+
+ tmp_grid = arcpy.sa.Times(goa_bathy_raster+'_SetNull', -1)
+ arcpy.AddMessage("\tTimes: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ tmp_grid.save(goa_bathy_raster)
+ del tmp_grid
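The SetNull-then-negate sequence used for the GOA grid above can be sketched in plain NumPy (a hypothetical stand-in for the `arcpy.sa` raster algebra, using NaN for NoData instead of a file-based grid):

```python
import numpy as np

def setnull_then_negate(grid, null_below=-1.0):
    """Mimic arcpy.sa.SetNull(grid, grid, "Value < -1.0") followed by
    arcpy.sa.Times(..., -1): cells below the threshold become NoData
    (NaN here); all remaining positive depths flip sign to negative elevation."""
    masked = np.where(grid < null_below, np.nan, grid)
    return masked * -1.0

# Positive-down bathymetry with one below-threshold cell
depths = np.array([10.0, 0.5, -2.0])
result = setnull_then_negate(depths)
```

Here `result` holds `[-10.0, -0.5, nan]`: the anomalous cell is nulled before negation, which is why the script saves an intermediate `_SetNull` raster and deletes it afterward.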
+
+ arcpy.AddMessage("Deleting the GOA_IDW_Bathy Null grid")
+
+ arcpy.management.Delete(goa_bathy_raster+'_SetNull')
+ arcpy.AddMessage("\tDelete: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.AddMessage("Appending the AI raster to the GOA grid to ensure complete coverage")
+
+ extent = arcpy.Describe(goa_bathy_raster).extent
+ X_Min, Y_Min, X_Max, Y_Max = extent.XMin-(1000 * 366), extent.YMin-(1000 * 80), extent.XMax, extent.YMax
+ extent = f"{X_Min} {Y_Min} {X_Max} {Y_Max}"
+ arcpy.env.extent = extent
+
+ arcpy.management.Append(inputs = ai_bathy_raster, target = goa_bathy_raster, schema_type="TEST", field_mapping="", subtype="")
+ arcpy.AddMessage("\tAppend: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.AddMessage("Clipping GOA Raster")
+
+ arcpy.management.Clip(goa_bathy_raster, extent, goa_bathy_raster+"_Clip")
+ arcpy.AddMessage("\tClip: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del extent
+
+ arcpy.AddMessage("Copying GOA Raster")
+
+ arcpy.management.CopyRaster(goa_bathy_raster+"_Clip", goa_bathy_raster)
+ arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.management.Delete(goa_bathy_raster+"_Clip")
+ arcpy.AddMessage("\tDelete: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.AddMessage("Appending the EBS raster to the AI grid to ensure complete coverage")
+
+ extent = arcpy.Describe(ai_bathy_raster).extent
+ X_Min, Y_Min, X_Max, Y_Max = extent.XMin, extent.YMin, extent.XMax, extent.YMax
+ extent = f"{X_Min} {Y_Min} {X_Max} {Y_Max}"
+ arcpy.env.extent = extent
+ del X_Min, Y_Min, X_Max, Y_Max
+
+ arcpy.management.Append(inputs = ebs_bathy_raster, target = ai_bathy_raster, schema_type="TEST", field_mapping="", subtype="")
+ arcpy.AddMessage("\tAppend: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.AddMessage("Clipping AI Raster")
+
+ arcpy.management.Clip(ai_bathy_raster, extent, ai_bathy_raster+"_Clip")
+ arcpy.AddMessage("\tClip: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del extent
+
+ arcpy.AddMessage("Copying AI Raster")
+
+ arcpy.management.CopyRaster(ai_bathy_raster+"_Clip", ai_bathy_raster)
+ arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ #arcpy.management.Delete(ai_bathy_raster+"_Clip")
+ #arcpy.AddMessage("\tDelete: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.ClearEnvironment("extent")
+
+ # ###--->>> Final copy of rasters in Base Folder Start
+
+ # Get the reference system defined for the region in datasets
+ # Set the output coordinate system to what is needed for the
+ # DisMAP project
+ region = "AI_IDW"
+ arcpy.env.outputCoordinateSystem = arcpy.SpatialReference(rf"{project_folder}\Dataset Shapefiles\{region}\{region}_Region.prj")
+ #arcpy.env.geographicTransformations = "WGS_1984_(ITRF08)_To_NAD_1983_2011"
+ #arcpy.env.geographicTransformations = check_transformation(goa_bathy_raster, region_sr)
+ #del region_sr
+ del region
+
+ arcpy.AddMessage("Copy AI_IDW_Bathymetry_Raster to AI_IDW_Bathymetry")
+
+ arcpy.management.CopyRaster(ai_bathy_raster, ai_bathymetry)
+ arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ # Get the reference system defined for the region in datasets
+ # Set the output coordinate system to what is needed for the
+ # DisMAP project
+ region = "EBS_IDW"
+ arcpy.env.outputCoordinateSystem = arcpy.SpatialReference(rf"{project_folder}\Dataset Shapefiles\{region}\{region}_Region.prj")
+ #arcpy.env.geographicTransformations = "WGS_1984_(ITRF08)_To_NAD_1983_2011"
+ #arcpy.env.geographicTransformations = check_transformation(goa_bathy_raster, region_sr)
+ #del region_sr
+ del region
+
+ arcpy.AddMessage("Copy EBS_IDW_Bathymetry_Raster to EBS_IDW_Bathymetry")
+
+ arcpy.management.CopyRaster(ebs_bathy_raster, ebs_bathymetry)
+ arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ # Get the reference system defined for the region in datasets
+ # Set the output coordinate system to what is needed for the
+ # DisMAP project
+ region = "ENBS_IDW"
+ arcpy.env.outputCoordinateSystem = arcpy.SpatialReference(rf"{project_folder}\Dataset Shapefiles\{region}\{region}_Region.prj")
+ #arcpy.env.geographicTransformations = "WGS_1984_(ITRF08)_To_NAD_1983_2011"
+ #arcpy.env.geographicTransformations = check_transformation(goa_bathy_raster, region_sr)
+ #del region_sr
+ del region
+
+ arcpy.AddMessage("Copy EBS_IDW_Bathymetry_Raster to ENBS_IDW_Bathymetry")
+
+ arcpy.management.CopyRaster(ebs_bathy_raster, enbs_bathymetry)
+ arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ # Get the reference system defined for the region in datasets
+ # Set the output coordinate system to what is needed for the
+ # DisMAP project
+ region = "NBS_IDW"
+ arcpy.env.outputCoordinateSystem = arcpy.SpatialReference(rf"{project_folder}\Dataset Shapefiles\{region}\{region}_Region.prj")
+ #arcpy.env.geographicTransformations = "WGS_1984_(ITRF08)_To_NAD_1983_2011"
+ #arcpy.env.geographicTransformations = check_transformation(goa_bathy_raster, region_sr)
+ #del region_sr
+ del region
+
+ arcpy.AddMessage("Copy EBS_IDW_Bathymetry_Raster to NBS_IDW_Bathymetry")
+
+ arcpy.management.CopyRaster(ebs_bathy_raster, nbs_bathymetry)
+ arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ # Get the reference system defined for the region in datasets
+ # Set the output coordinate system to what is needed for the
+ # DisMAP project
+ region = "GOA_IDW"
+ arcpy.env.outputCoordinateSystem = arcpy.SpatialReference(rf"{project_folder}\Dataset Shapefiles\{region}\{region}_Region.prj")
+ #arcpy.env.geographicTransformations = "WGS_1984_(ITRF08)_To_NAD_1983_2011"
+ #arcpy.env.geographicTransformations = check_transformation(goa_bathy_raster, region_sr)
+ #del region_sr
+ del region
+
+ arcpy.AddMessage("Copy GOA_IDW_Bathymetry_Raster to GOA_IDW_Bathymetry")
+
+ arcpy.management.CopyRaster(goa_bathy_raster, goa_bathymetry)
+ arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ del ai_bathy_grid, ebs_bathy_grid, goa_bathy_grid
+ del ai_bathy_raster, ebs_bathy_raster, goa_bathy_raster
+
+ # ###--->>> Final copy of rasters in Base Folder End
+
+ # ###--->>> Copy rasters for Base Folder to Project Folder Start
+
+## # Set Output Coordinate System
+## arcpy.env.outputCoordinateSystem = arcpy.Describe(ai_bathymetry).spatialReference
+##
+## arcpy.AddMessage("Copy AI_IDW_Bathymetry to the Project Bathymetry GDB")
+## arcpy.management.CopyRaster(ai_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\{os.path.basename(ai_bathymetry)}")
+## arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+##
+## # Set Output Coordinate System
+## arcpy.env.outputCoordinateSystem = arcpy.Describe(ebs_bathymetry).spatialReference
+##
+## arcpy.AddMessage("Copy EBS_IDW_Bathymetry to the Project Bathymetry GDB")
+## arcpy.management.CopyRaster(ebs_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\{os.path.basename(ebs_bathymetry)}")
+## arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+##
+## # Set Output Coordinate System
+## arcpy.env.outputCoordinateSystem = arcpy.Describe(enbs_bathymetry).spatialReference
+##
+## arcpy.AddMessage("Copy ENBS_IDW_Bathymetry to the Project Bathymetry GDB")
+## arcpy.management.CopyRaster(enbs_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\{os.path.basename(enbs_bathymetry)}")
+## arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+##
+## # Set Output Coordinate System
+## arcpy.env.outputCoordinateSystem = arcpy.Describe(nbs_bathymetry).spatialReference
+##
+## arcpy.AddMessage("Copy nbs_bathymetry to the Project Bathymetry GDB")
+## arcpy.management.CopyRaster(nbs_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\{os.path.basename(nbs_bathymetry)}")
+## arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+##
+## # Set Output Coordinate System
+## arcpy.env.outputCoordinateSystem = arcpy.Describe(goa_bathymetry).spatialReference
+##
+## arcpy.AddMessage("Copy GOA_IDW_Bathymetry to the Project Bathymetry GDB")
+## arcpy.management.CopyRaster(goa_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\{os.path.basename(goa_bathymetry)}")
+## arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ gdb = rf"{project_folder}\Bathymetry\Bathymetry.gdb"
+ arcpy.AddMessage(f"Compacting the {os.path.basename(gdb)} GDB")
+ arcpy.management.Compact(gdb)
+ arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+ del gdb
+
+ # Imports
+ del check_transformation
+ # Declared Variables for this function only
+ del ai_bathymetry, ebs_bathymetry, goa_bathymetry, enbs_bathymetry, nbs_bathymetry
+ # Function parameter
+ del project_folder
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def create_hawaii_bathymetry(project_folder=""):
+ try:
+ # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic workspace variables
+ arcpy.env.workspace = rf"{project_folder}\Bathymetry\Bathymetry.gdb"
+ arcpy.env.scratchWorkspace = rf"{project_folder}\Scratch\scratch.gdb"
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ arcpy.env.cellSize = 500
+
+ arcpy.env.pyramid = "PYRAMIDS -1 BILINEAR DEFAULT 75 NO_SKIP"
+ arcpy.env.rasterStatistics = "STATISTICS 1 1"
+ arcpy.env.resamplingMethod = "BILINEAR"
+
+ arcpy.env.outputCoordinateSystem = None
+
+ hi_bathy_grid = rf"{project_folder}\Bathymetry\Hawaii Bathymetry\BFISH_PSU.shp"
+ hi_bathy_raster = rf"{project_folder}\Bathymetry\Bathymetry.gdb\HI_IDW_Bathy_Raster"
+ hi_bathymetry = rf"{project_folder}\Bathymetry\Bathymetry.gdb\HI_IDW_Bathymetry"
+
+ arcpy.AddMessage("Converting Hawaii Polygon Grid to a Raster")
+
+ arcpy.conversion.PolygonToRaster(in_features = hi_bathy_grid, value_field = "Depth_MEDI", out_rasterdataset = hi_bathy_raster, cell_assignment="CELL_CENTER", priority_field="NONE", cellsize="500")
+ arcpy.AddMessage("\tPolygon To Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ tmp_grid = arcpy.sa.Times(hi_bathy_raster, -1.0)
+ arcpy.AddMessage("\tTimes: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ tmp_grid.save(hi_bathymetry)
+ del tmp_grid
+
+## arcpy.AddMessage("Copy Hawaii Raster to the Bathymetry GDB")
+##
+## arcpy.management.CopyRaster(hi_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\HI_IDW_Bathymetry")
+## arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+##
+## gdb = rf"{project_folder}\Bathymetry\Bathymetry.gdb"
+## arcpy.AddMessage(f"Compacting the {os.path.basename(gdb)} GDB")
+## arcpy.management.Compact(gdb)
+## arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+## del gdb
+
+ gdb = rf"{project_folder}\Bathymetry\Bathymetry.gdb"
+ arcpy.AddMessage(f"Compacting the {os.path.basename(gdb)} GDB")
+ arcpy.management.Compact(gdb)
+ arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+ del gdb
+
+ # Declared Variables for this function only
+ del hi_bathy_grid, hi_bathy_raster, hi_bathymetry
+ # Function parameter
+ del project_folder
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def gebco_bathymetry(project_folder=""):
+ try:
+
+ # Imports
+ from dismap_tools import check_transformation
+
+ # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic workspace variables
+ arcpy.env.workspace = rf"{project_folder}\Bathymetry\Bathymetry.gdb"
+ arcpy.env.scratchWorkspace = rf"{project_folder}\Scratch\scratch.gdb"
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ arcpy.env.cellSize = 1000
+
+ arcpy.env.pyramid = "PYRAMIDS -1 BILINEAR DEFAULT 75 NO_SKIP"
+ arcpy.env.rasterStatistics = "STATISTICS 1 1"
+ arcpy.env.resamplingMethod = "BILINEAR"
+
+ arcpy.env.outputCoordinateSystem = None
+
+ arcpy.AddMessage("Processing GEBCO Raster Grids")
+
+ #gebco_dict = get_dms_points_for_gebco(project_gdb)
+ gebco_dict = {
+ 'GMEX_IDW' : 'gebco_2022_n30.6_s25.8_w-97.4_e-81.6.asc',
+ 'NEUS_FAL_IDW' : 'gebco_2022_n44.8_s35.0_w-75.8_e-65.4.asc',
+ 'NEUS_SPR_IDW' : 'gebco_2022_n44.8_s35.0_w-75.8_e-65.4.asc',
+ 'SEUS_FAL_IDW' : 'gebco_2022_n35.4_s28.6_w-81.4_e-75.6.asc',
+ 'SEUS_SPR_IDW' : 'gebco_2022_n35.4_s28.6_w-81.4_e-75.6.asc',
+ 'SEUS_SUM_IDW' : 'gebco_2022_n35.4_s28.6_w-81.4_e-75.6.asc',
+ 'WC_ANN_IDW' : 'gebco_2022_n48.6_s32.0_w-126.0_e-115.8.asc',
+ 'WC_TRI_IDW' : 'gebco_2022_n49.2_s36.0_w-126.6_e-121.6.asc',
+ }
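The GEBCO file names in the dictionary above encode their own geographic bounds (n/s/w/e in decimal degrees). A small helper (hypothetical, not part of this script) can recover them for sanity-checking a region's coverage:

```python
import re

def gebco_bounds(file_name):
    """Parse the north/south/west/east bounds out of a GEBCO ASCII grid
    file name, e.g. 'gebco_2022_n30.6_s25.8_w-97.4_e-81.6.asc'."""
    pattern = r"gebco_\d{4}_n(-?[\d.]+)_s(-?[\d.]+)_w(-?[\d.]+)_e(-?[\d.]+)\.asc$"
    match = re.match(pattern, file_name)
    if not match:
        raise ValueError(f"Unrecognized GEBCO file name: {file_name}")
    north, south, west, east = map(float, match.groups())
    return {"north": north, "south": south, "west": west, "east": east}

bounds = gebco_bounds("gebco_2022_n30.6_s25.8_w-97.4_e-81.6.asc")
```

For the GMEX file this yields north 30.6, south 25.8, west -97.4, east -81.6, which can be compared against the corresponding `{table_name}_Region` extent before projecting.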
+
+ arcpy.AddMessage("Processing Regions")
+ # Start looping over the datasets array as we go region by region.
+ for table_name in gebco_dict:
+ gebco_file_name = gebco_dict[table_name]
+
+ gebco_grid = rf"{project_folder}\Bathymetry\GEBCO Bathymetry\{gebco_file_name}"
+ bathy_grid = rf"{project_folder}\Bathymetry\Bathymetry.gdb\{table_name}_Bathy_Grid"
+ bathy_raster = rf"{project_folder}\Bathymetry\Bathymetry.gdb\{table_name}_Bathy_Raster"
+ bathymetry = rf"{project_folder}\Bathymetry\Bathymetry.gdb\{table_name}_Bathymetry"
+
+ arcpy.AddMessage(f"Copy GEBCO File: {os.path.basename(gebco_grid)} to {os.path.basename(bathy_grid)}")
+
+ # Execute ASCIIToRaster
+ arcpy.conversion.ASCIIToRaster(gebco_grid, bathy_grid, "FLOAT")
+ arcpy.AddMessage("\tASCII To Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.AddMessage(f"Define projection for {os.path.basename(bathy_grid)}")
+
+ arcpy.management.DefineProjection(bathy_grid, gebco_grid.replace('.asc', '.prj'))
+ arcpy.AddMessage("\tDefine Projection: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.AddMessage(f"Project Raster to create: {os.path.basename(bathy_raster)}")
+
+ # Get the reference system defined for the region in datasets
+ # Set the output coordinate system to what is needed for the
+ # DisMAP project
+ region_sr = arcpy.SpatialReference(rf"{project_folder}\Dataset Shapefiles\{table_name}\{table_name}_Region.prj")
+
+ if region_sr.linearUnitName == "Kilometer":
+ arcpy.env.cellSize = 0.1
+ arcpy.env.XYResolution = 0.001
+ elif region_sr.linearUnitName == "Meter":
+ arcpy.env.cellSize = 1000
+ arcpy.env.XYResolution = 0.001
+
+ arcpy.env.outputCoordinateSystem = region_sr
+ #arcpy.env.geographicTransformations = "WGS_1984_(ITRF08)_To_NAD_1983_2011"
+ transform = check_transformation(bathy_grid, region_sr)
+ arcpy.env.geographicTransformations = transform
+
+ arcpy.AddMessage(f"\tOut Spatial Reference: {region_sr.name}")
+ arcpy.AddMessage(f"\tGeographic Transformations: {transform}")
+
+ # Project Raster management
+ arcpy.management.ProjectRaster(in_raster = bathy_grid, out_raster = bathy_raster, out_coor_system = region_sr)
+ arcpy.AddMessage("\tProject Raster: {0}".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ # Cleanup after last use
+ del region_sr, transform
+
+ arcpy.AddMessage(f"Set Null for positive elevation values to create: {os.path.basename(bathymetry)}")
+ with arcpy.EnvManager(scratchWorkspace=arcpy.env.scratchGDB, workspace=arcpy.env.workspace):
+ out_raster = arcpy.sa.SetNull(
+ in_conditional_raster = bathy_raster,
+ in_false_raster_or_constant = bathy_raster,
+ where_clause="Value > 1.0"
+ )
+ arcpy.AddMessage("\tSet Null: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ out_raster.save(bathymetry)
+ del out_raster
+
+ del gebco_grid, bathy_grid, bathy_raster, bathymetry
+ del gebco_file_name, table_name
+
+ del gebco_dict
+
+ gdb = rf"{project_folder}\Bathymetry\Bathymetry.gdb"
+ arcpy.AddMessage(f"Compacting the {os.path.basename(gdb)} GDB")
+ arcpy.management.Compact(gdb)
+ arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+ del gdb
+
+ # Declared Variables for this function only
+
+ # Imports
+ del check_transformation
+ # Function parameter
+ del project_folder
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def main(project_folder=""):
+ try:
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ # Create Scratch Workspace for Project
+ if not arcpy.Exists(rf"{project_folder}\Scratch\scratch.gdb"):
+ if not arcpy.Exists(rf"{project_folder}\Scratch"):
+ os.makedirs(rf"{project_folder}\Scratch")
+ if not arcpy.Exists(rf"{project_folder}\Scratch\scratch.gdb"):
+ arcpy.management.CreateFileGDB(rf"{project_folder}\Scratch", "scratch")
+
+ # Base Bathymetry Folder
+ if not os.path.isdir(rf"{project_folder}\Bathymetry"):
+ arcpy.AddMessage("Create Folder: 'Bathymetry'")
+ arcpy.management.CreateFolder(rf"{project_folder}\Bathymetry")
+ get_messages = "\t" + arcpy.GetMessages().replace('\n', '\n\t') + "\n"
+ arcpy.AddMessage(f"{get_messages}")
+ del get_messages
+ else:
+ pass
+ # Base Bathymetry GDB
+ if not arcpy.Exists(rf"{project_folder}\Bathymetry\Bathymetry.gdb"):
+ arcpy.AddMessage("Create File GDB: 'Bathymetry.gdb'")
+ arcpy.management.CreateFileGDB(rf"{project_folder}\Bathymetry", "Bathymetry")
+ get_messages = "\t" + arcpy.GetMessages().replace('\n', '\n\t') + "\n"
+ arcpy.AddMessage(f"{get_messages}")
+ del get_messages
+ else:
+ pass
+
+ test = True
+ # Process base Alaska bathymetry
+ if test:
+ result = create_alasaka_bathymetry(project_folder)
+ #arcpy.AddMessage(result)
+ del result
+ else:
+ pass
+
+ #test = False
+ # Process base Hawaii bathymetry
+ if test:
+ result = create_hawaii_bathymetry(project_folder)
+ #arcpy.AddMessage(result)
+ del result
+ else:
+ pass
+
+ #test = False
+ # Process base GEBCO bathymetry
+ if test:
+ result = gebco_bathymetry(project_folder)
+ #arcpy.AddMessage(result)
+ del result
+ else:
+ pass
+
+ del test
+
+ # Declared Variables
+
+ # Imports
+
+ # Function Parameters
+ del project_folder
+
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}\nCompleted: {strftime('%a %b %d %I:%M %p', localtime())}")
+ arcpy.AddMessage("Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time))))
+ arcpy.AddMessage(f"{'-' * 80}")
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ project_folder = arcpy.GetParameterAsText(0)
+
+ if not project_folder:
+ project_folder = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python")
+ else:
+ pass
+
+ result = main(project_folder)
+ arcpy.SetParameterAsText(1, result)
+ del result
+
+ # Declared Variables
+ del project_folder
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_data_dictionary_json_files.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_data_dictionary_json_files.py
new file mode 100644
index 0000000..3ee0014
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_data_dictionary_json_files.py
@@ -0,0 +1,2273 @@
+"""
+Script documentation
+
+- Tool parameters are accessed using arcpy.GetParameter() or
+ arcpy.GetParameterAsText()
+- Update derived parameter values using arcpy.SetParameter() or
+ arcpy.SetParameterAsText()
+"""
+import arcpy
+import os
+import traceback
+import inspect
+
+def script_tool(project_gdb=""):
+ """Script code goes below"""
+ try:
+ # Imports
+ # Use all of the cores on the machine
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.env.overwriteOutput = True
+
+ # Define variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+ scratch_gdb = os.path.join(scratch_folder, "scratch.gdb")
+
+ # Set the workspace environment to local file geodatabase
+ arcpy.env.workspace = project_gdb
+ # Set the scratchWorkspace environment to local file geodatabase
+ arcpy.env.scratchWorkspace = scratch_gdb
+ # Clean-up variables
+ del scratch_folder, scratch_gdb
+
+ arcpy.AddMessage(f"\n{'--Start' * 10}--\n")
+ arcpy.AddMessage(f"Creating Table and Field definitions for: {os.path.basename(project_gdb)}")
+
+ field_definitions = { "Bio_Inc_Dec": {
+ "field_aliasName": "Bio_Inc_Dec",
+ "field_baseName": "Bio_Inc_Dec",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 30,
+ "field_name": "Bio_Inc_Dec",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Bio_Inc_Dec",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Bio_Inc_Dec"
+ }
+ },
+ "CSVFile": {
+ "field_aliasName": "CSV File",
+ "field_baseName": "CSVFile",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 20,
+ "field_name": "CSVFile",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "CSV File",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "CSV File"
+ }
+ },
+ "Category": {
+ "field_aliasName": "Category",
+ "field_baseName": "Category",
+ "field_defaultValue": "null",
+ "field_domain": "MosaicCatalogItemCategoryDomain",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 4,
+ "field_name": "Category",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Integer",
+ "field_attrdef": "Category",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Category"
+ }
+ },
+ "CellSize": {
+ "field_aliasName": "Cell Size",
+ "field_baseName": "CellSize",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 4,
+ "field_name": "CellSize",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Cell Size",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Cell Size"
+ }
+ },
+ "CenterOfGravityDepth": {
+ "field_aliasName": "Center of Gravity Depth",
+ "field_baseName": "CenterOfGravityDepth",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "CenterOfGravityDepth",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Center of Gravity Depth",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Center of Gravity Depth"
+ }
+ },
+ "CenterOfGravityDepthSE": {
+ "field_aliasName": "Center of Gravity Depth Standard Error",
+ "field_baseName": "CenterOfGravityDepthSE",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "CenterOfGravityDepthSE",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Center of Gravity Depth Standard Error",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Center of Gravity Depth Standard Error"
+ }
+ },
+ "CenterOfGravityLatitude": {
+ "field_aliasName": "Center of Gravity Latitude",
+ "field_baseName": "CenterOfGravityLatitude",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "CenterOfGravityLatitude",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Center of Gravity Latitude",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Center of Gravity Latitude"
+ }
+ },
+ "CenterOfGravityLatitudeSE": {
+ "field_aliasName": "Center of Gravity Latitude Standard Error",
+ "field_baseName": "CenterOfGravityLatitudeSE",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "CenterOfGravityLatitudeSE",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Center of Gravity Latitude Standard Error",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Center of Gravity Latitude Standard Error"
+ }
+ },
+ "CenterOfGravityLongitude": {
+ "field_aliasName": "Center of Gravity Longitude",
+ "field_baseName": "CenterOfGravityLongitude",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "CenterOfGravityLongitude",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Center of Gravity Longitude",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Center of Gravity Longitude"
+ }
+ },
+ "CenterOfGravityLongitudeSE": {
+ "field_aliasName": "Center of Gravity Longitude Standard Error",
+ "field_baseName": "CenterOfGravityLongitudeSE",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "CenterOfGravityLongitudeSE",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Center of Gravity Longitude Standard Error",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Center of Gravity Longitude Standard Error"
+ }
+ },
+ "CenterX": {
+ "field_aliasName": "CenterX",
+ "field_baseName": "CenterX",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "CenterX",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "CenterX",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "CenterX"
+ }
+ },
+ "CenterY": {
+ "field_aliasName": "CenterY",
+ "field_baseName": "CenterY",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "CenterY",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "CenterY",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "CenterY"
+ }
+ },
+ "CommonName": {
+ "field_aliasName": "Common Name",
+ "field_baseName": "CommonName",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 40,
+ "field_name": "CommonName",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Common Name",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Common Name"
+ }
+ },
+ "CommonNameSpecies": {
+ "field_aliasName": "Common Name (Species)",
+ "field_baseName": "CommonNameSpecies",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 90,
+ "field_name": "CommonNameSpecies",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Common Name (Species)",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Common Name (Species)"
+ }
+ },
+ "CoreSpecies": {
+ "field_aliasName": "Core Species",
+ "field_baseName": "CoreSpecies",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 5,
+ "field_name": "CoreSpecies",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Core Species",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Core Species"
+ }
+ },
+ "Count": {
+ "field_aliasName": "Count",
+ "field_baseName": "Count",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "false",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "Count",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Count",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Count"
+ }
+ },
+ "DataCitation": {
+ "field_aliasName": "Data Citation",
+ "field_baseName": "DataCitation",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 255,
+ "field_name": "DataCitation",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Data Citation",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Data Citation"
+ }
+ },
+ "DataFilteringNotes": {
+ "field_aliasName": "Data Filtering Notes",
+ "field_baseName": "DataFilteringNotes",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 150,
+ "field_name": "DataFilteringNotes",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Data Filtering Notes",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Data Filtering Notes"
+ }
+ },
+ "DataSource": {
+ "field_aliasName": "Data Source",
+ "field_baseName": "DataSource",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 100,
+ "field_name": "DataSource",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Data Source",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Data Source"
+ }
+ },
+ "DatasetCode": {
+ "field_aliasName": "Dataset Code",
+ "field_baseName": "DatasetCode",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 50,
+ "field_name": "DatasetCode",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Dataset Code",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Dataset Code"
+ }
+ },
+ "DateCode": {
+ "field_aliasName": "Date Code",
+ "field_baseName": "DateCode",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 20,
+ "field_name": "DateCode",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Date Code",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Date Code"
+ }
+ },
+ "Depth": {
+ "field_aliasName": "Depth",
+ "field_baseName": "Depth",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "Depth",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Depth",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Depth"
+ }
+ },
+ "Dimensions": {
+ "field_aliasName": "Dimensions",
+ "field_baseName": "Dimensions",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 10,
+ "field_name": "Dimensions",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Dimensions",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Dimensions"
+ }
+ },
+ "DistributionProjectCode": {
+ "field_aliasName": "Distribution Project Code",
+ "field_baseName": "DistributionProjectCode",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 10,
+ "field_name": "DistributionProjectCode",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Distribution Project Code",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Distribution Project Code"
+ }
+ },
+ "DistributionProjectName": {
+ "field_aliasName": "Distribution Project Name",
+ "field_baseName": "DistributionProjectName",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 60,
+ "field_name": "DistributionProjectName",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Distribution Project Name",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Distribution Project Name"
+ }
+ },
+ "Easting": {
+ "field_aliasName": "Easting",
+ "field_baseName": "Easting",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "Easting",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Easting",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Easting"
+ }
+ },
+ "FeatureClassName": {
+ "field_aliasName": "Feature Class Name",
+ "field_baseName": "FeatureClassName",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 60,
+ "field_name": "FeatureClassName",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Feature Class Name",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Feature Class Name"
+ }
+ },
+ "FeatureServiceName": {
+ "field_aliasName": "Feature Service Name",
+ "field_baseName": "FeatureServiceName",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 60,
+ "field_name": "FeatureServiceName",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Feature Service Name",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Feature Service Name"
+ }
+ },
+ "FeatureServiceTitle": {
+ "field_aliasName": "Feature Service Title",
+ "field_baseName": "FeatureServiceTitle",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 80,
+ "field_name": "FeatureServiceTitle",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Feature Service Title",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Feature Service Title"
+ }
+ },
+ "FilterRegion": {
+ "field_aliasName": "Filter Region",
+ "field_baseName": "FilterRegion",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 25,
+ "field_name": "FilterRegion",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Filter Region",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Filter Region"
+ }
+ },
+ "FilterSubRegion": {
+ "field_aliasName": "Filter Sub-Region",
+ "field_baseName": "FilterSubRegion",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 40,
+ "field_name": "FilterSubRegion",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Filter Sub-Region",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Filter Sub-Region"
+ }
+ },
+ "Frequency": {
+ "field_aliasName": "Frequency",
+ "field_baseName": "Frequency",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 25,
+ "field_name": "Frequency",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Frequency",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Frequency"
+ }
+ },
+ "GearType": {
+ "field_aliasName": "Gear Type",
+ "field_baseName": "GearType",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 150,
+ "field_name": "GearType",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Gear Type",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Gear Type"
+ }
+ },
+ "GeographicArea": {
+ "field_aliasName": "Geographic Area",
+ "field_baseName": "GeographicArea",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 20,
+ "field_name": "GeographicArea",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Geographic Area",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Geographic Area"
+ }
+ },
+ "GroupName": {
+ "field_aliasName": "Group Name",
+ "field_baseName": "GroupName",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 100,
+ "field_name": "GroupName",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Group Name",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Group Name"
+ }
+ },
+ "HaulBin": {
+ "field_aliasName": "Haul Bin",
+ "field_baseName": "HaulBin",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 20,
+ "field_name": "HaulBin",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Haul Bin",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Haul Bin"
+ }
+ },
+ "Haul_Inc_Dec": {
+ "field_aliasName": "Haul_Inc_Dec",
+ "field_baseName": "Haul_Inc_Dec",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 30,
+ "field_name": "Haul_Inc_Dec",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Haul_Inc_Dec",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Haul_Inc_Dec"
+ }
+ },
+ "HaulProportion": {
+ "field_aliasName": "Haul Proportion",
+ "field_baseName": "HaulProportion",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "HaulProportion",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Haul Proportion",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Haul Proportion"
+ }
+ },
+ "HighPS": {
+ "field_aliasName": "HighPS",
+ "field_baseName": "HighPS",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "HighPS",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "HighPS",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "HighPS"
+ }
+ },
+ "ID": {
+ "field_aliasName": "ID",
+ "field_baseName": "ID",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 2,
+ "field_name": "ID",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "ID",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "ID"
+ }
+ },
+ "ImageName": {
+ "field_aliasName": "Image Name",
+ "field_baseName": "ImageName",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 100,
+ "field_name": "ImageName",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Image Name",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Image Name"
+ }
+ },
+ "ImageServiceName": {
+ "field_aliasName": "Image Service Name",
+ "field_baseName": "ImageServiceName",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 40,
+ "field_name": "ImageServiceName",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Image Service Name",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Image Service Name"
+ }
+ },
+ "ImageServiceTitle": {
+ "field_aliasName": "Image Service Title",
+ "field_baseName": "ImageServiceTitle",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 60,
+ "field_name": "ImageServiceTitle",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Image Service Title",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Image Service Title"
+ }
+ },
+ "ItemTS": {
+ "field_aliasName": "ItemTS",
+ "field_baseName": "ItemTS",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "ItemTS",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "ItemTS",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "ItemTS"
+ }
+ },
+ "Latitude": {
+ "field_aliasName": "Latitude",
+ "field_baseName": "Latitude",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "Latitude",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Latitude",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Latitude"
+ }
+ },
+ "Longitude": {
+ "field_aliasName": "Longitude",
+ "field_baseName": "Longitude",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "Longitude",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Longitude",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Longitude"
+ }
+ },
+ "LowPS": {
+ "field_aliasName": "LowPS",
+ "field_baseName": "LowPS",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "LowPS",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "LowPS",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "LowPS"
+ }
+ },
+ "ManagementBody": {
+ "field_aliasName": "Management Body",
+ "field_baseName": "ManagementBody",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 20,
+ "field_name": "ManagementBody",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Management Body",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Management Body"
+ }
+ },
+ "ManagementPlan": {
+ "field_aliasName": "Management Plan",
+ "field_baseName": "ManagementPlan",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 90,
+ "field_name": "ManagementPlan",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Management Plan",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Management Plan"
+ }
+ },
+ "MapValue": {
+ "field_aliasName": "Map Value",
+ "field_baseName": "MapValue",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "MapValue",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Map Value",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Map Value"
+ }
+ },
+ "MaxPS": {
+ "field_aliasName": "MaxPS",
+ "field_baseName": "MaxPS",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "MaxPS",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "MaxPS",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "MaxPS"
+ }
+ },
+ "MaximumDepth": {
+ "field_aliasName": "Maximum Depth",
+ "field_baseName": "MaximumDepth",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "MaximumDepth",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Maximum Depth",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Maximum Depth"
+ }
+ },
+ "MaximumLatitude": {
+ "field_aliasName": "Maximum Latitude",
+ "field_baseName": "MaximumLatitude",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "MaximumLatitude",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Maximum Latitude",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Maximum Latitude"
+ }
+ },
+ "MaximumLongitude": {
+ "field_aliasName": "Maximum Longitude",
+ "field_baseName": "MaximumLongitude",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "MaximumLongitude",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Maximum Longitude",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Maximum Longitude"
+ }
+ },
+ "MedianEstimate": {
+ "field_aliasName": "Median Estimate",
+ "field_baseName": "MedianEstimate",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "MedianEstimate",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Median Estimate",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Median Estimate"
+ }
+ },
+ "MinPS": {
+ "field_aliasName": "MinPS",
+ "field_baseName": "MinPS",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "MinPS",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "MinPS",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "MinPS"
+ }
+ },
+ "MinimumDepth": {
+ "field_aliasName": "Minimum Depth",
+ "field_baseName": "MinimumDepth",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "MinimumDepth",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Minimum Depth",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Minimum Depth"
+ }
+ },
+ "MinimumLatitude": {
+ "field_aliasName": "Minimum Latitude",
+ "field_baseName": "MinimumLatitude",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "MinimumLatitude",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Minimum Latitude",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Minimum Latitude"
+ }
+ },
+ "MinimumLongitude": {
+ "field_aliasName": "Minimum Longitude",
+ "field_baseName": "MinimumLongitude",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "MinimumLongitude",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Minimum Longitude",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Minimum Longitude"
+ }
+ },
+ "MosaicName": {
+ "field_aliasName": "Mosaic Name",
+ "field_baseName": "MosaicName",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 20,
+ "field_name": "MosaicName",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Mosaic Name",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Mosaic Name"
+ }
+ },
+ "MosaicTitle": {
+ "field_aliasName": "Mosaic Title",
+ "field_baseName": "MosaicTitle",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 60,
+ "field_name": "MosaicTitle",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Mosaic Title",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Mosaic Title"
+ }
+ },
+ "Name": {
+ "field_aliasName": "Name",
+ "field_baseName": "Name",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 200,
+ "field_name": "Name",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Name",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Name"
+ }
+ },
+ "NetWTCPUE": {
+ "field_aliasName": "Net WTCPUE",
+ "field_baseName": "NetWTCPUE",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "NetWTCPUE",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Net WTCPUE",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Net WTCPUE"
+ }
+ },
+ "Northing": {
+ "field_aliasName": "Northing",
+ "field_baseName": "Northing",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "Northing",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Northing",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Northing"
+ }
+ },
+ "Notes": {
+ "field_aliasName": "Notes",
+ "field_baseName": "Notes",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 40,
+ "field_name": "Notes",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Notes",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Notes"
+ }
+ },
+ "OffsetDepth": {
+ "field_aliasName": "Offset Depth",
+ "field_baseName": "OffsetDepth",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "OffsetDepth",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Offset Depth",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Offset Depth"
+ }
+ },
+ "OffsetLatitude": {
+ "field_aliasName": "Offset Latitude",
+ "field_baseName": "OffsetLatitude",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "OffsetLatitude",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Offset Latitude",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Offset Latitude"
+ }
+ },
+ "OffsetLongitude": {
+ "field_aliasName": "Offset Longitude",
+ "field_baseName": "OffsetLongitude",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "OffsetLongitude",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Offset Longitude",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Offset Longitude"
+ }
+ },
+ "Percentile": {
+ "field_aliasName": "Percentile",
+ "field_baseName": "Percentile",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "Percentile",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Percentile",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Percentile"
+ }
+ },
+ "PercentileBin": {
+ "field_aliasName": "Percentile Bin",
+ "field_baseName": "PercentileBin",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 20,
+ "field_name": "PercentileBin",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Percentile Bin",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Percentile Bin"
+ }
+ },
+ "PointFeatureType": {
+ "field_aliasName": "Point Feature Type",
+ "field_baseName": "PointFeatureType",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 20,
+ "field_name": "PointFeatureType",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Point Feature Type",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Point Feature Type"
+ }
+ },
+ "ProductName": {
+ "field_aliasName": "Product Name",
+ "field_baseName": "ProductName",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 100,
+ "field_name": "ProductName",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Product Name",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Product Name"
+ }
+ },
+ "Raster": {
+ "field_aliasName": "Raster",
+ "field_baseName": "Raster",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 0,
+ "field_name": "Raster",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Raster",
+ "field_attrdef": "Raster",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Raster"
+ }
+ },
+ "Region": {
+ "field_aliasName": "Region",
+ "field_baseName": "Region",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 40,
+ "field_name": "Region",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Region",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Region"
+ }
+ },
+ "SampleID": {
+ "field_aliasName": "Sample ID",
+ "field_baseName": "SampleID",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 20,
+ "field_name": "SampleID",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Sample ID",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Sample ID"
+ }
+ },
+ "Season": {
+ "field_aliasName": "Season",
+ "field_baseName": "Season",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 15,
+ "field_name": "Season",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Season",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Season"
+ }
+ },
+ "Species": {
+ "field_aliasName": "Species",
+ "field_baseName": "Species",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 50,
+ "field_name": "Species",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Species",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Species"
+ }
+ },
+ "SpeciesCommonName": {
+ "field_aliasName": "Species (Common Name)",
+ "field_baseName": "SpeciesCommonName",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 90,
+ "field_name": "SpeciesCommonName",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Species (Common Name)",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Species (Common Name)"
+ }
+ },
+ "StandardError": {
+ "field_aliasName": "Standard Error",
+ "field_baseName": "StandardError",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "StandardError",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Standard Error",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Standard Error"
+ }
+ },
+ "Status": {
+ "field_aliasName": "Status",
+ "field_baseName": "Status",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 10,
+ "field_name": "Status",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Status",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Status"
+ }
+ },
+ "StdTime": {
+ "field_aliasName": "StdTime",
+ "field_baseName": "StdTime",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "StdTime",
+ "field_precision": 1,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Date",
+ "field_attrdef": "StdTime",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "StdTime"
+ }
+ },
+ "Stratum": {
+ "field_aliasName": "Stratum",
+ "field_baseName": "Stratum",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 20,
+ "field_name": "Stratum",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Stratum",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Stratum"
+ }
+ },
+ "StratumArea": {
+ "field_aliasName": "Stratum Area",
+ "field_baseName": "StratumArea",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "StratumArea",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "Stratum Area",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Stratum Area"
+ }
+ },
+ "SummaryProduct": {
+ "field_aliasName": "Summary Product",
+ "field_baseName": "SummaryProduct",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 5,
+ "field_name": "SummaryProduct",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Summary Product",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Summary Product"
+ }
+ },
+ "SurveyName": {
+ "field_aliasName": "Survey Name",
+ "field_baseName": "SurveyName",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 100,
+ "field_name": "SurveyName",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Survey Name",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Survey Name"
+ }
+ },
+ "TableName": {
+ "field_aliasName": "Table Name",
+ "field_baseName": "TableName",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 50,
+ "field_name": "TableName",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Table Name",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Table Name"
+ }
+ },
+ "Tag": {
+ "field_aliasName": "Tag",
+ "field_baseName": "Tag",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 100,
+ "field_name": "Tag",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Tag",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Tag"
+ }
+ },
+ "TaxonomicGroup": {
+ "field_aliasName": "Taxonomic Group",
+ "field_baseName": "TaxonomicGroup",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 80,
+ "field_name": "TaxonomicGroup",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Taxonomic Group",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Taxonomic Group"
+ }
+ },
+ "TotalSpeciesCount": {
+ "field_aliasName": "Total Species Count",
+ "field_baseName": "TotalSpeciesCount",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 4,
+ "field_name": "TotalSpeciesCount",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Total Species Count",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Total Species Count"
+ }
+ },
+ "TransformUnit": {
+ "field_aliasName": "Transform Unit",
+ "field_baseName": "TransformUnit",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 20,
+ "field_name": "TransformUnit",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Transform Unit",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Transform Unit"
+ }
+ },
+ "TrendCategory": {
+ "field_aliasName": "Trend Category",
+ "field_baseName": "TrendCategory",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 40,
+ "field_name": "TrendCategory",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Trend Category",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Trend Category"
+ }
+ },
+ "TypeID": {
+ "field_aliasName": "Raster Type ID",
+ "field_baseName": "TypeID",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 4,
+ "field_name": "TypeID",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Integer",
+ "field_attrdef": "Raster Type ID",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Raster Type ID"
+ }
+ },
+ "Uri": {
+ "field_aliasName": "Uri",
+ "field_baseName": "Uri",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 0,
+ "field_name": "Uri",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Blob",
+ "field_attrdef": "Uri",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Uri"
+ }
+ },
+ "UriHash": {
+ "field_aliasName": "UriHash",
+ "field_baseName": "UriHash",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 50,
+ "field_name": "UriHash",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "UriHash",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "UriHash"
+ }
+ },
+ "Value": {
+ "field_aliasName": "Value",
+ "field_baseName": "Value",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 50,
+ "field_name": "Value",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Value",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Value"
+ }
+ },
+ "Variable": {
+ "field_aliasName": "Variable",
+ "field_baseName": "Variable",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 50,
+ "field_name": "Variable",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Variable",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Variable"
+ }
+ },
+ "WTCPUE": {
+ "field_aliasName": "WTCPUE",
+ "field_baseName": "WTCPUE",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 8,
+ "field_name": "WTCPUE",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Double",
+ "field_attrdef": "WTCPUE",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "WTCPUE"
+ }
+ },
+ "Year": {
+ "field_aliasName": "Year",
+ "field_baseName": "Year",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 4,
+ "field_name": "Year",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ #"field_type": "String",
+ "field_type": "Integer",
+ "field_attrdef": "Year",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Year"
+ }
+ },
+ "Years": {
+ "field_aliasName": "Years",
+ "field_baseName": "Years",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 25,
+ "field_name": "Years",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Years",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Years"
+ }
+ },
+ "ZOrder": {
+ "field_aliasName": "ZOrder",
+ "field_baseName": "ZOrder",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 4,
+ "field_name": "ZOrder",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "Integer",
+ "field_attrdef": "ZOrder",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "ZOrder"
+ }
+ }
+ }
+
+ _Bathymetry = []
+ _Boundary = ["DatasetCode", "Region", "Season", "DistributionProjectCode"]
+ _Datasets = ["DatasetCode", "CSVFile", "TransformUnit", "TableName",
+ "GeographicArea", "CellSize", "PointFeatureType", "FeatureClassName",
+ "Region", "Season", "DateCode", "Status", "DistributionProjectCode",
+ "DistributionProjectName", "SummaryProduct", "FilterRegion",
+ "FilterSubRegion", "FeatureServiceName", "FeatureServiceTitle",
+ "MosaicName", "MosaicTitle", "ImageServiceName",
+ "ImageServiceTitle"]
+ _DisMAP_Survey_Info = ["SurveyName", "Region", "Season", "GearType",
+ "Years", "Frequency", "DataFilteringNotes",
+ "TotalSpeciesCount", "DataSource", "DataCitation"]
+ _Extent_Points = ["Easting", "Northing", "Longitude", "Latitude"]
+ _Fishnet = []
+ #_GLMME = ["DatasetCode", "Region", "SummaryProduct", "Year", "StdTime",
+ # "Species", "WTCPUE", "MapValue", "StandardError", "TransformUnit",
+ # "CommonName", "SpeciesCommonName", "CommonNameSpecies", "Easting",
+ # "Northing", "Latitude", "Longitude", "MedianEstimate", "Depth"]
+ #_GRID_Points = ["DatasetCode", "Region", "SummaryProduct", "Year",
+ # "StdTime", "Species", "WTCPUE", "MapValue", "StandardError",
+ # "TransformUnit", "CommonName", "SpeciesCommonName",
+ # "CommonNameSpecies", "Easting", "Northing", "Latitude",
+ # "Longitude", "MedianEstimate", "Depth"]
+ _IDW = ["DatasetCode", "Region", "Season", "DistributionProjectName",
+ "SummaryProduct", "SampleID", "Year", "StdTime", "Species",
+ "WTCPUE", "MapValue", "TransformUnit", "CommonName",
+ "SpeciesCommonName", "CommonNameSpecies", "CoreSpecies",
+ "Stratum", "StratumArea", "Latitude", "Longitude", "Depth"]
+ _Indicators = ["DatasetCode", "Region", "Season", "DateCode", "Species",
+ "CommonName", "CoreSpecies", "Year", "DistributionProjectName",
+ "DistributionProjectCode", "SummaryProduct",
+ "CenterOfGravityLatitude", "MinimumLatitude",
+ "MaximumLatitude", "OffsetLatitude", "CenterOfGravityLatitudeSE",
+ "CenterOfGravityLongitude", "MinimumLongitude",
+ "MaximumLongitude", "OffsetLongitude", "CenterOfGravityLongitudeSE",
+ "CenterOfGravityDepth", "MinimumDepth", "MaximumDepth",
+ "OffsetDepth", "CenterOfGravityDepthSE"]
+ _LayerSpeciesYearImageName = ["DatasetCode", "Region", "Season", "SummaryProduct",
+ "FilterRegion", "FilterSubRegion", "Species",
+ "CommonName", "SpeciesCommonName",
+ "CommonNameSpecies", "TaxonomicGroup",
+ "ManagementBody", "ManagementPlan",
+ "DistributionProjectName", "CoreSpecies",
+ "Year", "StdTime", "Variable",
+ "Value", "Dimensions", "ImageName"]
+ _Lat_Long = ["Easting", "Northing", "Longitude", "Latitude"]
+ _Latitude = []
+ _Longitude = []
+ _Mosaic = ["Raster", "Name", "MinPS", "MaxPS", "LowPS", "HighPS",
+ "Category", "Tag", "GroupName", "ProductName", "CenterX",
+ "CenterY", "ZOrder", "TypeID", "ItemTS", "UriHash", "Uri",
+ "DatasetCode", "Region", "Season", "Species", "CommonName",
+ "SpeciesCommonName", "CoreSpecies", "Year", "StdTime",
+ "Variable", "Value", "Dimensions"]
+ _Raster_Mask = ["Value", "Count", "ID"]
+ _Region = ["DatasetCode", "Region", "Season", "DistributionProjectCode"]
+ _Sample_Locations = ["DatasetCode", "Region", "Season", "SummaryProduct",
+ "SampleID", "Year", "StdTime", "Species", "WTCPUE",
+ "MapValue", "TransformUnit", "CommonName",
+ "SpeciesCommonName", "CommonNameSpecies", "CoreSpecies",
+ "Stratum", "StratumArea", "Latitude", "Longitude",
+ "Depth"]
+ _Species_Filter = ["Species", "CommonName", "TaxonomicGroup", "FilterRegion",
+ "FilterSubRegion", "ManagementBody", "ManagementPlan", "DistributionProjectName"]
+ _SpeciesPersistenceIndicatorTrend = ["Region", "SurveyName", "Species", "CommonName", "TrendCategory", "Notes", "Haul_Inc_Dec", "Bio_Inc_Dec"]
+ _SpeciesPersistenceIndicatorPercentileBin = ["Region", "SurveyName", "Year", "Species", "CommonName", "PercentileBin", "WTCPUE", "HaulProportion", "HaulBin"]
+
+
+ #datasets_table = arcpy.ListTables("Datasets")[0]
+ #datasets_table_fields = [f.name for f in arcpy.ListFields(datasets_table) if f.type not in ["Geometry", "OID"] and f.name not in ["Shape_Area", "Shape_Length"]]
+
+ data_dictionary = dict()
+ #arcpy.AddMessage(datasets_table_fields)
+ #['DatasetCode', 'CSVFile', 'TransformUnit', 'TableName',
+ # 'GeographicArea', 'CellSize', 'PointFeatureType', 'FeatureClassName',
+ # 'Region', 'Season', 'DateCode', 'Status', 'DistributionProjectCode',
+ # 'DistributionProjectName', 'SummaryProduct', 'FilterRegion',
+ # 'FilterSubRegion', 'FeatureServiceName', 'FeatureServiceTitle',
+ # 'MosaicName', 'MosaicTitle', 'ImageServiceName', 'ImageServiceTitle']
+
+ table_names = ["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW",
+ "HI_IDW", "NBS_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW",
+ "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW",
+ "WC_ANN_IDW", "WC_TRI_IDW", "DisMAP_Regions", "Datasets",
+ "LayerSpeciesYearImageName", "Indicators", "Species_Filter",
+ "DisMAP_Survey_Info", "SpeciesPersistenceIndicatorTrend",
+ "SpeciesPersistenceIndicatorPercentileBin",]
+
+ for table_name in table_names:
+ arcpy.AddMessage(table_name)
+ if table_name == "DisMAP_Regions":
+ data_dictionary[table_name] = _Region
+ elif table_name == "Datasets":
+ data_dictionary[table_name] = _Datasets
+ elif table_name == "LayerSpeciesYearImageName":
+ data_dictionary[table_name] = _LayerSpeciesYearImageName
+ elif table_name == "Indicators":
+ data_dictionary[table_name] = _Indicators
+ elif table_name == "Species_Filter":
+ data_dictionary[table_name] = _Species_Filter
+ elif table_name == "DisMAP_Survey_Info":
+ data_dictionary[table_name] = _DisMAP_Survey_Info
+ elif table_name == "SpeciesPersistenceIndicatorPercentileBin":
+ data_dictionary[table_name] = _SpeciesPersistenceIndicatorPercentileBin
+ elif table_name == "SpeciesPersistenceIndicatorTrend":
+ data_dictionary[table_name] = _SpeciesPersistenceIndicatorTrend
+ elif table_name.endswith("_IDW"): # or table_name.endswith("_GLMME"):
+ if table_name.endswith("_IDW"):
+ data_dictionary[table_name] = _IDW
+ data_dictionary[f"{table_name}_Sample_Locations"] = _Sample_Locations
+ data_dictionary[f"{table_name}_Indicators"] = _Indicators
+ #elif table_name.endswith("_GLMME"):
+ # data_dictionary[table_name] = _GLMME
+ # data_dictionary[f"{table_name}_GRID_Points"] = _GRID_Points
+ else:
+ pass
+ data_dictionary[f"{table_name}_Bathymetry"] = _Bathymetry
+ data_dictionary[f"{table_name}_Boundary"] = _Boundary
+ data_dictionary[f"{table_name}_Extent_Points"] = _Extent_Points
+ data_dictionary[f"{table_name}_Fishnet"] = _Fishnet
+ data_dictionary[f"{table_name}_LayerSpeciesYearImageName"] = _LayerSpeciesYearImageName
+ data_dictionary[f"{table_name}_Lat_Long"] = _Lat_Long
+ data_dictionary[f"{table_name}_Latitude"] = _Latitude
+ data_dictionary[f"{table_name}_Longitude"] = _Longitude
+ data_dictionary[f"{table_name}_Mosaic"] = _Mosaic
+ data_dictionary[f"{table_name}_Raster_Mask"] = _Raster_Mask
+ data_dictionary[f"{table_name}_Region"] = _Region
+ else:
+ pass
+ del table_name
+
+## fields = ['DatasetCode', 'DistributionProjectCode']
+
+## with arcpy.da.SearchCursor(datasets_table, fields) as cursor:
+## for row in cursor:
+## DatasetCode = f'{row[0]}'
+## DistributionProjectCode = f'{"_"+row[1] if row[1] is not None and row[1] not in row[0] else ""}'
+## table_name = f'{DatasetCode}{DistributionProjectCode}'
+## arcpy.AddMessage(table_name)
+## if table_name == "DisMAP_Regions":
+## data_dictionary[table_name] = _Region
+## elif table_name == "Datasets":
+## data_dictionary[table_name] = _Datasets
+## elif table_name == "LayerSpeciesYearImageName":
+## data_dictionary[table_name] = _LayerSpeciesYearImageName
+## elif table_name == "Indicators":
+## data_dictionary[table_name] = _Indicators
+## elif table_name == "Species_Filter":
+## data_dictionary[table_name] = _Species_Filter
+## elif table_name == "DisMAP_Survey_Info":
+## data_dictionary[table_name] = _DisMAP_Survey_Info
+## elif table_name == "SpeciesPersistenceIndicatorPercentileBin":
+## data_dictionary[table_name] = _SpeciesPersistenceIndicatorPercentileBin
+## elif table_name == "SpeciesPersistenceIndicatorTrend":
+## data_dictionary[table_name] = _SpeciesPersistenceIndicatorTrend
+## elif table_name.endswith("_IDW") or table_name.endswith("_GLMME"):
+## if table_name.endswith("_IDW"):
+## data_dictionary[table_name] = _IDW
+## data_dictionary[f"{table_name}_Sample_Locations"] = _Sample_Locations
+## data_dictionary[f"{table_name}_Indicators"] = _Indicators
+## elif table_name.endswith("_GLMME"):
+## data_dictionary[table_name] = _GLMME
+## data_dictionary[f"{table_name}_GRID_Points"] = _GRID_Points
+## else:
+## pass
+## data_dictionary[f"{table_name}_Bathymetry"] = _Bathymetry
+## data_dictionary[f"{table_name}_Boundary"] = _Boundary
+## data_dictionary[f"{table_name}_Extent_Points"] = _Extent_Points
+## data_dictionary[f"{table_name}_Fishnet"] = _Fishnet
+## data_dictionary[f"{table_name}_LayerSpeciesYearImageName"] = _LayerSpeciesYearImageName
+## data_dictionary[f"{table_name}_Lat_Long"] = _Lat_Long
+## data_dictionary[f"{table_name}_Latitude"] = _Latitude
+## data_dictionary[f"{table_name}_Longitude"] = _Longitude
+## data_dictionary[f"{table_name}_Mosaic"] = _Mosaic
+## data_dictionary[f"{table_name}_Raster_Mask"] = _Raster_Mask
+## data_dictionary[f"{table_name}_Region"] = _Region
+## else:
+## pass
+## del DistributionProjectCode
+## del DatasetCode
+## del table_name
+## del row
+## del cursor
+
+ for key in sorted(data_dictionary):
+ arcpy.AddMessage(f"Table: {key}")
+ _fields = data_dictionary[key]
+ for _field in _fields:
+ arcpy.AddMessage(f"\t{_field}")
+ if _fields: del _field # guard: _fields may be empty (e.g. the Bathymetry tables)
+ del _fields
+ del key
+
+ table_definitions = {k:v for k,v in sorted(data_dictionary.items())}
+ import json
+ # Write to File
+ json_path = rf"{project_folder}\CSV_Data\table_definitions.json"
+ #print(f"project folder: {project_folder}")
+ with open(json_path, 'w') as json_file:
+ json.dump(table_definitions, json_file, indent=4)
+ del json_file
+ del json_path
+ del json
+
+ for table in table_definitions:
+ #arcpy.AddMessage(f"{table}")
+ fields = table_definitions[table]
+ #arcpy.AddMessage(f"\t{type(fields)}")
+ del table
+ for field in fields:
+ #arcpy.AddMessage(f"\t{field}")
+ if field in field_definitions:
+ pass
+ #arcpy.AddMessage(f"\t{field_definitions[field]}")
+ else:
+ pass
+ #arcpy.AddMessage(f"\t\t###--->>> {field} not in _field_definitions")
+ if fields: del field # guard: some tables (e.g. Bathymetry, Fishnet) have no fields
+ del fields
+ del table_definitions
+
+ #for key in sorted(field_definitions):
+ # #arcpy.AddMessage(f"Table: {key}")
+ # _fields = field_definitions[key]
+ # if "attrdef" not in _fields:
+ # field_definitions[key]["field_attrdef"] = field_definitions[key]["field_aliasName"]
+ # if "attrdefs" not in _fields:
+ # field_definitions[key]["field_attrdefs"] = "DisMAP Project GDB Data Dictionary"
+ # if "attrdomv" not in _fields:
+ # field_definitions[key]["field_attrdomv"] = {"udom": f"{field_definitions[key]['field_aliasName']}"}
+ # else:
+ # pass
+ # del _fields
+ # del key
+
+ #for key in sorted(field_definitions):
+ # #arcpy.AddMessage(f"Table: {key}")
+ # _fields = field_definitions[key]
+ # for _field in _fields:
+ # #arcpy.AddMessage(f"\t{_field}")
+ # del _field
+ # del _fields
+ # del key
+
+ import json
+ # Write to File
+ json_path = rf"{project_folder}\CSV_Data\field_definitions.json"
+ with open(json_path, 'w') as json_file:
+ json.dump(field_definitions, json_file, indent=4)
+ del json_file
+ del json_path
+ del json
+
+ del field_definitions
+ del data_dictionary, table_names
+ del _Bathymetry, _Boundary, _Datasets, _DisMAP_Survey_Info
+ del _Extent_Points, _Fishnet, _IDW, _Indicators
+ #del _GLMME, _GRID_Points
+ del _LayerSpeciesYearImageName, _Lat_Long, _Latitude, _Longitude
+ del _Mosaic, _Raster_Mask, _Region, _Sample_Locations, _Species_Filter
+ del _SpeciesPersistenceIndicatorPercentileBin
+ del _SpeciesPersistenceIndicatorTrend
+
+ # Compact GDB
+ #arcpy.AddMessage(f"\nCompacting: {os.path.basename(project_gdb)}" )
+ arcpy.management.Compact(project_gdb)
+
+ arcpy.AddMessage(f"\n{'--End' * 10}--")
+
+ # Declared Variables
+ del project_folder
+ # Imports
+ # Function parameters
+ del project_gdb
+
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+
+ except SystemExit as se:
+ arcpy.AddError(f"Caught a SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an unexpected error in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+if __name__ == "__main__":
+ try:
+
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\February 1 2026\February 1 2026.gdb"
+ else:
+ pass
+
+ script_tool(project_gdb)
+ arcpy.SetParameterAsText(1, "Result")
+
+ del project_gdb
+
+ except SystemExit:
+ pass
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ else:
+ pass
+ finally:
+ pass
+
+
+
+
+
+
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_indicators_table_director.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_indicators_table_director.py
new file mode 100644
index 0000000..4c816a6
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_indicators_table_director.py
@@ -0,0 +1,450 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_indicators_table_director
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 09/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+
+import arcpy # third-parties second
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ #filename = sys.path[0] + os.sep + f"{os.path.basename(__file__)}"
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def director(project_gdb="", Sequential=True, table_names=[]):
+ try:
+ # Imports
+ import dismap_tools
+ from create_indicators_table_worker import preprocessing, worker
+
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(rf"{project_gdb}"):
+ arcpy.AddError(f"{os.path.basename(project_gdb)} is missing!!")
+ arcpy.AddError(arcpy.GetMessages(2))
+ sys.exit()
+ else:
+ pass
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ #project_folder = os.path.dirname(project_gdb)
+ scratch_workspace = rf"{os.path.dirname(project_gdb)}\Scratch\scratch.gdb"
+ scratch_folder = rf"{os.path.dirname(project_gdb)}\Scratch"
+ csv_data_folder = rf"{os.path.dirname(project_gdb)}\CSV_Data"
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+
+ preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
+
+ # Sequential Processing
+ if Sequential:
+ arcpy.AddMessage("Sequential Processing")
+ for i in range(0, len(table_names)):
+ arcpy.AddMessage(f"Processing: {table_names[i]}")
+ table_name = table_names[i]
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ try:
+ worker(region_gdb=region_gdb)
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ del region_gdb, table_name
+ del i
+ else:
+ pass
+
+ # Non-Sequential Processing
+ if not Sequential:
+ arcpy.AddMessage("Non-Sequential Processing")
+ # Imports
+ import multiprocessing
+ from time import time, localtime, strftime, sleep, gmtime
+ arcpy.AddMessage("Start multiprocessing using the ArcGIS Pro pythonw.exe.")
+ # Set the multiprocessing executable explicitly: when running embedded (i.e. inside ArcGIS),
+ # workers should spawn with pythonw.exe rather than the host application's executable
+ multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
+ # Get the CPU count, then reserve two cores for other processes
+ _processes = multiprocessing.cpu_count() - 2
+ _processes = min(_processes, len(table_names))
+ arcpy.AddMessage(f"Creating the multiprocessing Pool with {_processes} processes")
+ #Create a pool of workers, keeping two CPUs free for other processes.
+ #Let each worker process only handle 1 task before being restarted (in case of nasty memory leaks)
+ with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
+ arcpy.AddMessage("\tPrepare arguments for processing")
+ # Use apply_async so we can handle exceptions gracefully
+ jobs={}
+ for i in range(0, len(table_names)):
+ try:
+ arcpy.AddMessage(f"Processing: {table_names[i]}")
+ table_name = table_names[i]
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ jobs[table_name] = pool.apply_async(worker, [region_gdb])
+ del table_name, region_gdb
+ except: # noqa: E722
+ pool.terminate()
+ traceback.print_exc()
+ sys.exit()
+ del i
+ all_finished = False
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ result_completed = {}
+ while True:
+ all_finished = True
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ arcpy.AddMessage(f"\nStart Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage("Have the workers finished?")
+ finish_time = strftime('%a %b %d %I:%M %p', localtime())
+ time_elapsed = f"Elapsed Time {strftime('%H:%M:%S', gmtime(elapse_time))} (H:M:S)"
+ arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
+ finish_time = f"{finish_time}.\n\t{time_elapsed}"
+ del time_elapsed
+ for table_name, result in jobs.items():
+ if result.ready():
+ if table_name not in result_completed:
+ result_completed[table_name] = finish_time
+ try:
+ # wait for and get the result from the task
+ result.get()
+ except: # noqa: E722
+ pool.terminate()
+ traceback.print_exc()
+ sys.exit()
+ else:
+ pass
+ arcpy.AddMessage(f"Process {table_name}\n\tFinished on {result_completed[table_name]}")
+ else:
+ all_finished = False
+ arcpy.AddMessage(f"Process {table_name} is running. . .")
+ del table_name, result
+ del elapse_time, end_time, finish_time
+ if all_finished:
+ break
+ sleep(_processes * 7.5)
+ del result_completed
+ del start_time
+ del all_finished
+ arcpy.AddMessage("\tClose the process pool")
+ # close the process pool
+ pool.close()
+ # wait for all tasks to complete and processes to close
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
+ pool.join()
+ # Just in case
+ pool.terminate()
+ del pool
+ del jobs
+ del _processes
+ del time, multiprocessing, localtime, strftime, sleep, gmtime
+ arcpy.AddMessage("\tDone with multiprocessing Pool")
+
+ # Post-Processing
+ arcpy.AddMessage("Post-Processing Begins")
+
+ datasets = list()
+ walk = arcpy.da.Walk(scratch_folder, datatype=["Table", "FeatureClass"])
+ for dirpath, dirnames, filenames in walk:
+ for filename in filenames:
+ datasets.append(os.path.join(dirpath, filename))
+ del filename
+ del dirpath, dirnames, filenames
+ del walk
+ for dataset in datasets:
+ datasets_short_path = f".. {'/'.join(dataset.split(os.sep)[-4:])}"
+ dataset_name = os.path.basename(dataset)
+ region_gdb = os.path.dirname(dataset)
+ arcpy.AddMessage(f"\tDataset: '{dataset_name}'")
+ arcpy.AddMessage(f"\t\tPath: '{datasets_short_path}'")
+ arcpy.AddMessage(f"\t\tRegion GDB: '{os.path.basename(region_gdb)}'")
+ arcpy.management.Copy(dataset, rf"{project_gdb}\{dataset_name}")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ #arcpy.management.Delete(dataset)
+ #arcpy.AddMessage(f"\t\tAlter Fields for: '{dataset}'")
+ #dismap_tools.alter_fields(csv_data_folder, rf"{project_gdb}\{dataset}")
+ del region_gdb, dataset_name, datasets_short_path
+ del dataset
+ del datasets
+
+ arcpy.AddMessage(f"Compacting the {os.path.basename(project_gdb)} GDB")
+ arcpy.management.Compact(project_gdb)
+ arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+ # Declared Variables
+ del scratch_folder, csv_data_folder, scratch_workspace
+ # Imports
+ del preprocessing, worker, dismap_tools
+ # Function Parameters
+ del project_gdb, Sequential, table_names
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def process_indicator_tables(project_gdb=""):
+ try:
+ # Import
+ from arcpy import metadata as md
+ import dismap_tools
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+ csv_data_folder = rf"{project_folder}\CSV_Data"
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ arcpy.management.CreateTable(project_gdb, "Indicators", "", "", "")
+ arcpy.AddMessage("\tCreate Table: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ indicators = rf"{project_gdb}\Indicators"
+
+ dismap_tools.add_fields(csv_data_folder, indicators)
+ #dismap_tools.alter_fields(csv_data_folder, indicators)
+ dismap_tools.import_metadata(csv_data_folder, indicators)
+
+ #in_tables = [it for it in arcpy.ListTables("*_Indicators") if it == "AI_IDW_Indicators"]
+ #in_tables = [it for it in arcpy.ListTables("*_Indicators") if not any(lo in it for lo in ["GFDL", "GLMME"])]
+ in_tables = [it for it in arcpy.ListTables("*_Indicators")]
+
+ if not in_tables:
+ arcpy.AddWarning(f"Indicator Tables are not present in the {os.path.basename(project_gdb)} GDB")
+ else:
+ for in_table in sorted(in_tables):
+ arcpy.AddMessage(f"Table: {in_table}")
+ in_table_path = rf"{project_gdb}\{in_table}"
+ del in_table
+
+ arcpy.AddMessage("\tUpdating field values to replace None with empty string")
+
+ fields = [f.name for f in arcpy.ListFields(in_table_path) if f.type == "String"]
+ #for field in fields:
+ # arcpy.AddMessage(f"\t{field.name}\t{field.type}")
+ # del field
+ # Create update cursor for feature class
+ with arcpy.da.UpdateCursor(in_table_path, fields) as cursor:
+ for row in cursor:
+ # enumerate gives the positional index directly; row.index()
+ # only ever finds the first matching value in the row
+ for i, field_value in enumerate(row):
+ if field_value is None:
+ row[i] = ""
+ cursor.updateRow(row)
+ del i, field_value
+ del row
+ del cursor
+ del fields
+
+ fields = [f.name for f in arcpy.ListFields(in_table_path) if f.name == "DateCode"]
+ #for field in fields:
+ # arcpy.AddMessage(f"\t{field.name}\t{field.type}")
+ # del field
+ # Create update cursor for feature class
+ with arcpy.da.UpdateCursor(in_table_path, fields) as cursor:
+ for row in cursor:
+ #arcpy.AddMessage(row)
+ #arcpy.AddMessage(dismap_tools.date_code(row[0]))
+ datecode = dismap_tools.date_code(row[0])
+ #arcpy.AddMessage(datecode)
+ row[0] = datecode
+ cursor.updateRow(row)
+ del datecode
+ del row
+ del cursor
+ del fields
+
+ arcpy.management.Append(inputs=in_table_path, target=indicators, schema_type="TEST", field_mapping="", subtype="")
+ arcpy.AddMessage("\tAppend: {0} {1}\n".format(f"{os.path.basename(in_table_path)}", arcpy.GetMessages(0).replace("\n", '\n\t')))
+
+ del in_table_path
+ # end for loop
+
+ dataset_md = md.Metadata(indicators)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ del dataset_md
+
+ arcpy.AddMessage(f"Compacting the {os.path.basename(project_gdb)} GDB")
+ arcpy.management.Compact(project_gdb)
+ arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+ # Declared Variables assigned in function
+ del in_tables, indicators
+ del scratch_folder, scratch_workspace, csv_data_folder, project_folder
+ # Imports
+ del dismap_tools, md
+ # Function Parameters
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def script_tool(project_gdb=""):
+ try:
+ # Imports
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(project_gdb):
+ sys.exit(f"{os.path.basename(project_gdb)} is missing!!")
+ else:
+ pass
+
+ try:
+ pass
+ # "AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW",
+ # "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",
+
+ Test = False
+ if Test:
+ director(project_gdb=project_gdb, Sequential=True, table_names=["AI_IDW", "HI_IDW",])
+ else:
+ director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
+ del Test
+
+ # Combine Indicator Tables
+ CombineIndicatorTables = True
+ if CombineIndicatorTables:
+ process_indicator_tables(project_gdb=project_gdb)
+ else:
+ pass
+ del CombineIndicatorTables
+
+ except: # noqa: E722
+ traceback.print_exc()
+ sys.exit()
+
+ # Declared Variables
+ # Imports
+ # Function Parameters
+ del project_gdb
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(elapse_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+
+ script_tool(project_gdb)
+
+ arcpy.SetParameterAsText(1, "Result")
+
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_indicators_table_worker.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_indicators_table_worker.py
new file mode 100644
index 0000000..fdd1c6e
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_indicators_table_worker.py
@@ -0,0 +1,1071 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: dev_create_indicators_table_worker
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 09/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+import inspect
+
+import arcpy # third-parties second
+
+def printRowContent(region_indicators):
+ try:
+ arcpy.AddMessage("Print records in the Region Indicators table")
+
+ fields = [f.name for f in arcpy.ListFields(region_indicators) if f.type not in ['Geometry', 'OID']]
+ with arcpy.da.SearchCursor(region_indicators, fields) as cursor:
+ #with arcpy.da.SearchCursor(region_indicators, fields, "CoreSpecies = 'No'") as cursor:
+ for row in cursor:
+ #arcpy.AddMessage(', '.join((row)))
+
+ DatasetCode = row[0]
+ Region = row[1]
+ Season = row[2]
+ DateCode = row[3]
+ Species = row[4]
+ CommonName = row[5]
+ CoreSpecies = row[6]
+ Year = row[7]
+ #DistributionProjectName = row[8]
+ DistributionProjectCode = row[9]
+ SummaryProduct = row[10]
+ CenterOfGravityLatitude = '' if row[11] is None else f'{row[11]:.1f}'
+ MinimumLatitude = '' if row[12] is None else f'{row[12]:.1f}'
+ MaximumLatitude = '' if row[13] is None else f'{row[13]:.1f}'
+ OffsetLatitude = '' if row[14] is None else f'{row[14]:.1f}'
+ CenterOfGravityLatitudeSE = '' if row[15] is None else f'{row[15]:.1f}'
+ CenterOfGravityLongitude = '' if row[16] is None else f'{row[16]:.1f}'
+ MinimumLongitude = '' if row[17] is None else f'{row[17]:.1f}'
+ MaximumLongitude = '' if row[18] is None else f'{row[18]:.1f}'
+ OffsetLongitude = '' if row[19] is None else f'{row[19]:.1f}'
+ CenterOfGravityLongitudeSE = '' if row[20] is None else f'{row[20]:.1f}'
+ CenterOfGravityDepth = '' if row[21] is None else f'{row[21]:.1f}'
+ MinimumDepth = '' if row[22] is None else f'{row[22]:.1f}'
+ MaximumDepth = '' if row[23] is None else f'{row[23]:.1f}'
+ OffsetDepth = '' if row[24] is None else f'{row[24]:.1f}'
+ CenterOfGravityDepthSE = '' if row[25] is None else f'{row[25]:.1f}'
+
+ #arcpy.AddMessage(DatasetCode, Region, Season, DateCode, Species, CommonName, CoreSpecies, Year, DistributionProjectName, DistributionProjectCode, SummaryProduct, CenterOfGravityLatitude, MinimumLatitude, MaximumLatitude, OffsetLatitude, CenterOfGravityLatitudeSE, CenterOfGravityLongitude, MinimumLongitude, MaximumLongitude, OffsetLongitude, CenterOfGravityLongitudeSE, CenterOfGravityDepth, MinimumDepth, MaximumDepth, OffsetDepth, CenterOfGravityDepthSE)
+ arcpy.AddMessage(f"{DatasetCode}, {Region}, {Season}, {DateCode}, {Species}, {CommonName}, {CoreSpecies}, {Year}, {DistributionProjectCode}, {SummaryProduct}, {CenterOfGravityLatitude}, {MinimumLatitude}, {MaximumLatitude}, {OffsetLatitude}, {CenterOfGravityLatitudeSE}, {CenterOfGravityLongitude}, {MinimumLongitude}, {MaximumLongitude}, {OffsetLongitude}, {CenterOfGravityLongitudeSE}, {CenterOfGravityDepth}, {MinimumDepth}, {MaximumDepth}, {OffsetDepth}, {CenterOfGravityDepthSE}")
+
+ del DatasetCode, Region, Season, DateCode, Species, CommonName
+ #del DistributionProjectName
+ del CoreSpecies, Year
+ del DistributionProjectCode, SummaryProduct, CenterOfGravityLatitude
+ del MinimumLatitude, MaximumLatitude, OffsetLatitude
+ del CenterOfGravityLatitudeSE, CenterOfGravityLongitude
+ del MinimumLongitude, MaximumLongitude, OffsetLongitude
+ del CenterOfGravityLongitudeSE, CenterOfGravityDepth, MinimumDepth
+ del MaximumDepth, OffsetDepth, CenterOfGravityDepthSE
+
+ del row
+ del cursor
+ del fields
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ traceback.print_exc()
+ sys.exit()
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit as se:
+ arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ sys.exit()
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+def worker(region_gdb=""):
+ try:
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(rf"{region_gdb}"):
+ arcpy.AddError(f"{os.path.basename(region_gdb)} is missing!!")
+ arcpy.AddError(f"Function: '{inspect.stack()[0][3]}', Line Number: {inspect.stack()[0][2]}")
+ sys.exit()
+ else:
+ pass
+
+ import numpy as np
+ import math
+
+ import dismap_tools
+
+ np.seterr(divide='ignore', invalid='ignore')
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ table_name = os.path.basename(region_gdb).replace(".gdb","")
+ scratch_folder = os.path.dirname(region_gdb)
+ project_folder = os.path.dirname(scratch_folder)
+ csv_data_folder = rf"{project_folder}\CSV_Data"
+ image_folder = rf"{project_folder}\Images"
+ scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+
+ arcpy.AddMessage(f"Table Name: {table_name}\nProject Folder: {os.path.basename(project_folder)}\nScratch Folder: {os.path.basename(scratch_folder)}\n")
+
+ arcpy.env.workspace = region_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ #arcpy.env.compression = "LZ77"
+ #arcpy.env.geographicTransformations = "WGS_1984_(ITRF08)_To_NAD_1983_2011"
+ #arcpy.env.pyramid = "PYRAMIDS -1 BILINEAR LZ77 NO_SKIP"
+ arcpy.env.resamplingMethod = "BILINEAR"
+ arcpy.env.rasterStatistics = "STATISTICS 1 1"
+ #arcpy.env.buildStatsAndRATForTempRaster = True
+
+ arcpy.management.CreateTable(region_gdb, f"{table_name}_Indicators", "", "", "")
+ arcpy.AddMessage("\tCreate Table: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ dismap_tools.add_fields(csv_data_folder, os.path.join(region_gdb, f"{table_name}_Indicators"))
+ #dismap_tools.import_metadata(rf"{region_gdb}\{table_name}_Indicators")
+
+ del csv_data_folder
+
+ arcpy.AddMessage(f"Generating {table_name} Indicators Table")
+
+ # "DatasetCode", "CSVFile", "TransformUnit", "TableName", "GeographicArea",
+ # "CellSize", "PointFeatureType", "FeatureClassName", "Region", "Season",
+ # "DateCode", "Status", "DistributionProjectCode", "DistributionProjectName",
+ # "SummaryProduct", "FilterRegion", "FilterSubRegion", "FeatureServiceName",
+ # "FeatureServiceTitle", "MosaicName", "MosaicTitle", "ImageServiceName",
+ # "ImageServiceTitle"
+
+ arcpy.AddMessage(f"\tGet list of values for the {table_name} Indicators table from the Datasets table")
+
+ fields = ["DatasetCode", "TableName", "CellSize", "Region", "Season",
+ "DateCode", "DistributionProjectCode", "DistributionProjectName",
+ "SummaryProduct",]
+ region_list = [row for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", fields, where_clause = f"TableName = '{table_name}'")][0]
+ del fields
+
+ # Assigning variables from items in the chosen table list
+ # ['AI_IDW', 'AI_IDW_Region', 'AI', 'Aleutian Islands', None, 'IDW']
+ datasetcode = region_list[0]
+ table_name = region_list[1]
+ cellsize = region_list[2]
+ region = region_list[3]
+ season = region_list[4]
+ datecode = region_list[5]
+ distributionprojectcode = region_list[6]
+ distributionprojectname = region_list[7]
+ summaryproduct = region_list[8]
+ del region_list
+
+ # Convert the Month Day Year date code to YYYYMMDD
+ #datecode = dismap.date_code(datecode)
+ #arcpy.AddMessage(datecode)
+ #arcpy.AddMessage(dismap.date_code(datecode))
+
+ arcpy.env.cellSize = cellsize
+ del cellsize
+
+ # Region Raster Mask
+ datasetcode_raster_mask = os.path.join(region_gdb, f"{table_name}_Raster_Mask")
+
+ # Set the environment's extent, mask, and snap raster; otherwise output rasters may not align with the region mask.
+ arcpy.env.extent = arcpy.Describe(datasetcode_raster_mask).extent
+ arcpy.env.mask = datasetcode_raster_mask
+ arcpy.env.snapRaster = datasetcode_raster_mask
+ del datasetcode_raster_mask
+
+ # Region Indicators
+ region_indicators = rf"{region_gdb}\{table_name}_Indicators"
+
+ # Region Bathymetry
+ region_bathymetry = rf"{region_gdb}\{table_name}_Bathymetry"
+
+ # Region Latitude
+ region_latitude = rf"{region_gdb}\{table_name}_Latitude"
+
+ # Region Longitude
+ region_longitude = rf"{region_gdb}\{table_name}_Longitude"
+
+ layerspeciesyearimagename = rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName"
+
+ input_rasters = {}
+
+ arcpy.AddMessage("\tCreate a list of input biomass raster path locations")
+
+ #fields = "DatasetCode;Region;Season;Species;CommonName;SpeciesCommonName;CoreSpecies;Year;StdTime;Variable;Value;Dimensions"
+ #fields = fields.split(";")
+ fields = ['ImageName', 'Variable', 'Species', 'CommonName', 'CoreSpecies', 'Year']
+
+ #arcpy.AddMessage(fields)
+ #fields = [f.name for f in arcpy.ListFields(datasetcode_tmp) if f.type not in ['Geometry', 'OID']]
+ with arcpy.da.SearchCursor(layerspeciesyearimagename, fields, where_clause = f"DatasetCode = '{datasetcode}'") as cursor:
+ for row in cursor:
+ image_name, variable, species, commonname, corespecies, year = row[0], row[1], row[2], row[3], row[4], row[5]
+ #if variable not in variables: variables.append(variable)
+ #arcpy.AddMessage(variable, image_name)
+ #variable = f"_{variable}" if "Species Richness" in variable else variable
+ if "Species Richness" not in variable:
+ #arcpy.AddMessage(variable, year)
+ input_raster_path = rf"{image_folder}\{table_name}\{variable}\{image_name}.tif"
+ #arcpy.AddMessage(input_raster_path)
+ #input_rasters[variable, year] = [image_name, variable, species, commonname, corespecies, year, input_raster_path]
+ #input_rasters[variable][year] = [image_name, variable, species, commonname, corespecies, year, input_raster_path]
+ #input_rasters[variable] = {year : image_name}
+ if variable not in input_rasters:
+ input_rasters[variable] = {year : [image_name, variable, species, commonname, corespecies, year, input_raster_path]}
+ else:
+ value = input_rasters[variable]
+ if year not in value:
+ value[year] = [image_name, variable, species, commonname, corespecies, year, input_raster_path]
+ input_rasters[variable] = value
+ del value
+
+ del input_raster_path
+
+ del row, image_name, variable, species, commonname, corespecies, year
+ del cursor
+
+ #arcpy.AddMessage(variables)
+ del fields
+ #del variables
+ del layerspeciesyearimagename
+ del image_folder
+
+ # Start with empty row_values list of list
+ row_values = []
+
+ arcpy.AddMessage("Iterate over the species names")
+
+ for variable in sorted(input_rasters):
+
+ first_year = 9999
+
+ raster_years = input_rasters[variable]
+
+ for raster_year in sorted(raster_years):
+
+ image_name, variable, species, commonname, corespecies, year, input_raster_path = raster_years[raster_year]
+
+ PrintRecord = False
+ if PrintRecord:
+ arcpy.AddMessage(f"\t> Image Name: {image_name}")
+ arcpy.AddMessage(f"\t\t> Variable: {variable}")
+ arcpy.AddMessage(f"\t\t> Species: {species}")
+ arcpy.AddMessage(f"\t\t> Common Name: {commonname}")
+ arcpy.AddMessage(f"\t\t> Core Species: {corespecies}")
+ arcpy.AddMessage(f"\t\t> Year: {year}")
+ arcpy.AddMessage(f"\t\t> Output Raster: {os.path.basename(input_raster_path)}")
+ del PrintRecord
+
+ arcpy.AddMessage(f"Processing {image_name} Biomass Raster for year: {raster_year}")
+
+ # Get maximumBiomass value to filter out "zero" rasters
+ maximumBiomass = float(arcpy.management.GetRasterProperties(input_raster_path, "MAXIMUM").getOutput(0))
+
+ arcpy.AddMessage(f"\t> Biomass Raster Maximum: {maximumBiomass}")
+
+ # arcpy.AddMessage(variable, corespecies, first_year, year)
+
+ # If maximumBiomass greater than zero, then process raster
+ if maximumBiomass > 0.0:
+ # Test is for first year
+
+ first_year = year if year < first_year else first_year
+ #arcpy.AddMessage(f"\t{first_year}, {year} {first_year == year}")
+
+ arcpy.AddMessage("\t> Calculating biomassArray")
+
+ biomassArray = arcpy.RasterToNumPyArray(input_raster_path, nodata_to_value=np.nan)
+ biomassArray[biomassArray <= 0.0] = np.nan
+
+ #sumWtCpue = sum of all wtcpue values (get this from input_raster_path stats??)
+ sumBiomassArray = np.nansum(biomassArray)
+
+ arcpy.AddMessage(f"\t> sumBiomassArray: {sumBiomassArray}")
+
+ arcpy.AddMessage(f"\t> biomassArray non-nan count: {np.count_nonzero(~np.isnan(biomassArray))}")
+
+ # ###--->>> Biomass End
+
+ arcpy.AddMessage("\t> Calculating latitudeArray")
+
+ # ###--->>> Latitude Start
+ #CenterOfGravityLatitude = None
+ #MinimumLatitude = None
+ #MaximumLatitude = None
+ #OffsetLatitude = None
+ #CenterOfGravityLatitudeSE = None
+
+ # Latitude
+ latitudeArray = arcpy.RasterToNumPyArray(region_latitude, nodata_to_value=np.nan)
+ #arcpy.AddMessage(latitudeArray.shape)
+ latitudeArray[np.isnan(biomassArray)] = np.nan
+ #arcpy.AddMessage(latitudeArray.shape)
+
+ #arcpy.AddMessage(f"\t\t> latitudeArray non-nan count: {np.count_nonzero(~np.isnan(latitudeArray)):,d}")
+
+ #arcpy.AddMessage(f"\t\t> Latitude Min: {np.nanmin(latitudeArray)}")
+
+ #arcpy.AddMessage(f"\t\t> Latitude Max: {np.nanmax(latitudeArray)}")
+
+ # make the biomass and latitude arrays one dimensional
+
+ flatBiomassArray = biomassArray.flatten()
+
+ flatLatitudeArray = latitudeArray.flatten()
+
+ # latsInds is an array of indexes representing the sort
+
+ latsInds = flatLatitudeArray.argsort()
+
+ # sort biomass and latitude arrays by lat sorted index
+
+ sortedBiomassArray = flatBiomassArray[latsInds]
+ sortedLatitudeArray = flatLatitudeArray[latsInds]
+
+ # calculate the cumulative sum of the sorted biomass values
+
+ sortedBiomassArrayCumSum = np.nancumsum(sortedBiomassArray)
+
+ # quantile is cumulative sum value divided by total biomass
+
+ sortedBiomassArrayQuantile = sortedBiomassArrayCumSum / np.nansum(flatBiomassArray)
+
+ # find the difference between 0.95 and each cumulative sum value ... absolute value gives the closest distance
+
+ diffArray = np.abs(sortedBiomassArrayQuantile - 0.95)
+
+ # find the index of the smallest difference
+
+ minIndex = diffArray.argmin()
+
+ # get the lat at that index
+
+ #maxLat = sortedLatitudeArray[minIndex]
+ MaximumLatitude = sortedLatitudeArray[minIndex]
+
+ # do the same for 0.05
+
+ diffArray = np.abs(sortedBiomassArrayQuantile - 0.05)
+
+ minIndex = diffArray.argmin()
+
+ #minLat = sortedLatitudeArray[minIndex]
+ MinimumLatitude = sortedLatitudeArray[minIndex]
+
+ del sortedBiomassArrayCumSum, sortedBiomassArrayQuantile
+ del diffArray, minIndex
+ del sortedLatitudeArray, sortedBiomassArray, flatBiomassArray
+ del latsInds, flatLatitudeArray
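+
+ # Worked example of the quantile search above (illustrative numbers,
+ # not data from this project): biomass [1.0, 2.0, 7.0] sorted by
+ # latitude [10, 20, 30] gives cumulative sums [1, 3, 10] and
+ # quantiles [0.1, 0.3, 1.0]; the quantile nearest 0.95 is 1.0, so
+ # MaximumLatitude = 30, and the nearest to 0.05 is 0.1, so
+ # MinimumLatitude = 10.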
+
+ weightedLatitudeArray = np.multiply(biomassArray, latitudeArray)
+
+ sumWeightedLatitudeArray = np.nansum(weightedLatitudeArray)
+
+ #arcpy.AddMessage(f"\t\t> Sum Weighted Latitude: {sumWeightedLatitudeArray}")
+
+ CenterOfGravityLatitude = sumWeightedLatitudeArray / sumBiomassArray
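+ # e.g. (illustrative numbers only) biomass [1, 2, 7] at latitudes
+ # [10, 20, 30] gives (1*10 + 2*20 + 7*30) / 10 = 26.0, the
+ # biomass-weighted mean latitude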
+
+ if year == first_year:
+ first_year_offset_latitude = CenterOfGravityLatitude
+
+ OffsetLatitude = CenterOfGravityLatitude - first_year_offset_latitude
+
+ weightedLatitudeArrayVariance = np.nanvar(weightedLatitudeArray)
+ weightedLatitudeArrayCount = np.count_nonzero(~np.isnan(weightedLatitudeArray))
+
+ CenterOfGravityLatitudeSE = math.sqrt(weightedLatitudeArrayVariance) / math.sqrt(weightedLatitudeArrayCount)
+
+ del weightedLatitudeArrayVariance, weightedLatitudeArrayCount
+
+ #arcpy.AddMessage(f"\t\t> Center of Gravity Latitude: {round(CenterOfGravityLatitude,6)}"
+ arcpy.AddMessage(f"\t\t> Center of Gravity Latitude: {CenterOfGravityLatitude}")
+ arcpy.AddMessage(f"\t\t> Minimum Latitude (5th Percentile): {MinimumLatitude}")
+ arcpy.AddMessage(f"\t\t> Maximum Latitude (95th Percentile): {MaximumLatitude}")
+ arcpy.AddMessage(f"\t\t> Offset Latitude: {OffsetLatitude}")
+ arcpy.AddMessage(f"\t\t> Center of Gravity Latitude Standard Error: {CenterOfGravityLatitudeSE}")
+
+ del latitudeArray, weightedLatitudeArray, sumWeightedLatitudeArray
+
+ # ###--->>> Latitude End
+
+ arcpy.AddMessage("\t> Calculating longitudeArray")
+
+ # ###--->>> Longitude Start
+ #CenterOfGravityLongitude = None
+ #MinimumLongitude = None
+ #MaximumLongitude = None
+ #OffsetLongitude = None
+ #CenterOfGravityLongitudeSE = None
+
+ # For issue of international date line
+ # Added/Modified by JFK June 15, 2022
+ longitudeArray = arcpy.RasterToNumPyArray(region_longitude, nodata_to_value=np.nan)
+
+ longitudeArray = np.mod(longitudeArray, 360.0)
+
+ longitudeArray[np.isnan(biomassArray)] = np.nan
+
+ # make the biomass and longitude arrays one dimensional
+
+ flatBiomassArray = biomassArray.flatten()
+
+ flatLongitudeArray = longitudeArray.flatten()
+
+ # longsInds is an array of indexes representing the sort
+
+ longsInds = flatLongitudeArray.argsort()
+
+ # sort biomass and longitude arrays by the longitude sort index
+
+ sortedBiomassArray = flatBiomassArray[longsInds]
+ sortedLongitudeArray = flatLongitudeArray[longsInds]
+
+ # calculate the cumulative sum of the sorted biomass values
+
+ sortedBiomassArrayCumSum = np.nancumsum(sortedBiomassArray)
+
+ # quantile is cumulative sum value divided by total biomass
+
+ sortedBiomassArrayQuantile = sortedBiomassArrayCumSum / np.nansum(flatBiomassArray)
+
+ # find the difference between 0.95 and each cumulative sum value ... absolute value gives the closest distance
+
+ diffArray = np.abs(sortedBiomassArrayQuantile - 0.95)
+
+ # find the index of the smallest difference
+
+ minIndex = diffArray.argmin()
+
+ # get the longitude at that index
+
+ MaximumLongitude = sortedLongitudeArray[minIndex]
+
+ # do the same for 0.05
+
+ diffArray = np.abs(sortedBiomassArrayQuantile - 0.05)
+
+ minIndex = diffArray.argmin()
+
+ MinimumLongitude = sortedLongitudeArray[minIndex]
+
+ del sortedBiomassArrayCumSum, sortedBiomassArrayQuantile, diffArray, minIndex
+ del sortedLongitudeArray, sortedBiomassArray, flatBiomassArray
+ del longsInds, flatLongitudeArray
+
+ weightedLongitudeArray = np.multiply(biomassArray, longitudeArray)
+
+ sumWeightedLongitudeArray = np.nansum(weightedLongitudeArray)
+
+ CenterOfGravityLongitude = sumWeightedLongitudeArray / sumBiomassArray
+
+ if year == first_year:
+ first_year_offset_longitude = CenterOfGravityLongitude
+
+ OffsetLongitude = CenterOfGravityLongitude - first_year_offset_longitude
+
+ weightedLongitudeArrayVariance = np.nanvar(weightedLongitudeArray)
+ weightedLongitudeArrayCount = np.count_nonzero(~np.isnan(weightedLongitudeArray))
+
+ CenterOfGravityLongitudeSE = math.sqrt(weightedLongitudeArrayVariance) / math.sqrt(weightedLongitudeArrayCount)
+
+ del weightedLongitudeArrayVariance, weightedLongitudeArrayCount
+
+ # Convert 360 back to 180
+ # Added/Modified by JFK June 15, 2022
+ CenterOfGravityLongitude = np.mod(CenterOfGravityLongitude - 180.0, 360.0) - 180.0
+ MinimumLongitude = np.mod(MinimumLongitude - 180.0, 360.0) - 180.0
+ MaximumLongitude = np.mod(MaximumLongitude - 180.0, 360.0) - 180.0
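+
+ # Dateline round trip (illustrative): a longitude of -170 maps to
+ # np.mod(-170, 360) = 190 before weighting, and back via
+ # np.mod(190 - 180, 360) - 180 = -170, so statistics that straddle
+ # the 180th meridian are not distorted.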
+
+ #arcpy.AddMessage(f"\t\t> Sum Weighted Longitude: {sumWeightedLongitudeArray}")
+
+ #arcpy.AddMessage(f"\t\t> Center of Gravity Longitude: {round(CenterOfGravityLongitude,6)}"
+ arcpy.AddMessage(f"\t\t> Center of Gravity Longitude: {CenterOfGravityLongitude}")
+
+ #arcpy.AddMessage(F"\t\t> Center of Gravity Longitude: {np.mod(CenterOfGravityLongitude - 180.0, 360.0) -180.0}")
+
+ arcpy.AddMessage(f"\t\t> Minimum Longitude (5th Percentile): {MinimumLongitude}")
+ arcpy.AddMessage(f"\t\t> Maximum Longitude (95th Percentile): {MaximumLongitude}")
+ arcpy.AddMessage(f"\t\t> Offset Longitude: {OffsetLongitude}")
+ arcpy.AddMessage(f"\t\t> Center of Gravity Longitude Standard Error: {CenterOfGravityLongitudeSE}")
+
+ del longitudeArray, weightedLongitudeArray, sumWeightedLongitudeArray
+
+ # ###--->>> Longitude End
+
+ arcpy.AddMessage("\t> Calculating bathymetryArray")
+
+ # ###--->>> Center of Gravity Depth (Bathymetry) Start
+
+ #CenterOfGravityDepth = None
+ #MinimumDepth = None
+ #MaximumDepth = None
+ #OffsetDepth = None
+ #CenterOfGravityDepthSE = None
+
+ # Bathymetry
+ bathymetryArray = arcpy.RasterToNumPyArray(region_bathymetry, nodata_to_value=np.nan)
+ # If biomass cells are Null, make bathymetry cells Null as well
+ bathymetryArray[np.isnan(biomassArray)] = np.nan
+ # For bathymetry values zero or larger, set to zero
+ bathymetryArray[bathymetryArray >= 0.0] = 0.0
+
+ #arcpy.AddMessage(f"\t\t> bathymetryArray non-nan count: {np.count_nonzero(~np.isnan(bathymetryArray))}")
+ #arcpy.AddMessage(f"\t\t> Bathymetry Min: {np.nanmin(bathymetryArray)}")
+ #arcpy.AddMessage(f"\t\t> Bathymetry Max: {np.nanmax(bathymetryArray)}")
+ # make the biomass and bathymetry arrays one dimensional
+
+ flatBiomassArray = biomassArray.flatten()
+
+ flatBathymetryArray = bathymetryArray.flatten()
+
+ # bathyInds is an array of indexes representing the sort
+
+ bathyInds = flatBathymetryArray.argsort()
+
+ # sort biomass and bathymetry arrays by the depth sort index
+
+ sortedBiomassArray = flatBiomassArray[bathyInds]
+ sortedBathymetryArray = flatBathymetryArray[bathyInds]
+
+ # calculate the cumulative sum of the sorted biomass values
+
+ sortedBiomassArrayCumSum = np.nancumsum(sortedBiomassArray)
+
+ # quantile is cumulative sum value divided by total biomass
+
+ sortedBiomassArrayQuantile = sortedBiomassArrayCumSum/np.nansum(flatBiomassArray)
+
+ # find the difference between 0.95 and each quantile
+ # value ... the absolute value gives the closest distance
+
+ diffArray = np.abs(sortedBiomassArrayQuantile - 0.95)
+
+ # find the index of the smallest difference
+
+ minIndex = diffArray.argmin()
+
+ # get the depth at that index
+
+ #maxLat = sortedBathymetryArray[minIndex]
+ MaximumDepth = sortedBathymetryArray[minIndex]
+
+ # do the same for 0.05
+
+ diffArray = np.abs(sortedBiomassArrayQuantile - 0.05)
+
+ minIndex = diffArray.argmin()
+
+ #minLat = sortedBathymetryArray[minIndex]
+ MinimumDepth = sortedBathymetryArray[minIndex]
+
+ del sortedBiomassArrayCumSum, sortedBiomassArrayQuantile, diffArray, minIndex
+ del sortedBathymetryArray, sortedBiomassArray, flatBiomassArray
+ del bathyInds, flatBathymetryArray
+
+ weightedBathymetryArray = np.multiply(biomassArray, bathymetryArray)
+
+ sumWeightedBathymetryArray = np.nansum(weightedBathymetryArray)
+
+ arcpy.AddMessage(f"\t\t> Sum Weighted Bathymetry: {sumWeightedBathymetryArray}")
+
+ CenterOfGravityDepth = sumWeightedBathymetryArray / sumBiomassArray
+
+ if year == first_year:
+ first_year_offset_depth = CenterOfGravityDepth
+
+ OffsetDepth = CenterOfGravityDepth - first_year_offset_depth
+
+ weightedBathymetryArrayVariance = np.nanvar(weightedBathymetryArray)
+ weightedBathymetryArrayCount = np.count_nonzero(~np.isnan(weightedBathymetryArray))
+
+ CenterOfGravityDepthSE = math.sqrt(weightedBathymetryArrayVariance) / math.sqrt(weightedBathymetryArrayCount)
+
+ del weightedBathymetryArrayVariance, weightedBathymetryArrayCount
+
+ arcpy.AddMessage("\t\t> Center of Gravity Depth: {0}".format(CenterOfGravityDepth))
+
+ arcpy.AddMessage("\t\t> Minimum Depth (5th Percentile): {0}".format(MinimumDepth))
+
+ arcpy.AddMessage("\t\t> Maximum Depth (95th Percentile): {0}".format(MaximumDepth))
+
+ arcpy.AddMessage("\t\t> Offset Depth: {0}".format(OffsetDepth))
+
+ arcpy.AddMessage("\t\t> Center of Gravity Depth Standard Error: {0}".format(CenterOfGravityDepthSE))
+
+ del bathymetryArray, weightedBathymetryArray
+ del sumWeightedBathymetryArray
+
+ # ###--->>> Center of Gravity Depth (Bathymetry) End
+
+ # Clean Up
+ del biomassArray, sumBiomassArray
+
+ elif maximumBiomass == 0.0:
+ CenterOfGravityLatitude = None
+ MinimumLatitude = None
+ MaximumLatitude = None
+ OffsetLatitude = None
+ CenterOfGravityLatitudeSE = None
+ CenterOfGravityLongitude = None
+ MinimumLongitude = None
+ MaximumLongitude = None
+ OffsetLongitude = None
+ CenterOfGravityLongitudeSE = None
+ CenterOfGravityDepth = None
+ MinimumDepth = None
+ MaximumDepth = None
+ OffsetDepth = None
+ CenterOfGravityDepthSE = None
+
+ else:
+ arcpy.AddWarning("Something is wrong with the biomass raster")
+
+
+ arcpy.AddMessage("\t> Assigning variables to row values")
+
+ # Clean-up
+ del maximumBiomass
+
+ # Standard for all records
+ DatasetCode = datasetcode
+ Region = region
+ Season = season
+ DateCode = datecode
+ Species = species
+ CommonName = commonname
+ CoreSpecies = corespecies
+ Year = year
+ DistributionProjectName = distributionprojectname
+ DistributionProjectCode = distributionprojectcode
+ SummaryProduct = summaryproduct
+
+ row = [
+ DatasetCode,
+ Region,
+ Season,
+ DateCode,
+ Species,
+ CommonName,
+ CoreSpecies,
+ Year,
+ DistributionProjectName,
+ DistributionProjectCode,
+ SummaryProduct,
+ CenterOfGravityLatitude,
+ MinimumLatitude,
+ MaximumLatitude,
+ OffsetLatitude,
+ CenterOfGravityLatitudeSE,
+ CenterOfGravityLongitude,
+ MinimumLongitude,
+ MaximumLongitude,
+ OffsetLongitude,
+ CenterOfGravityLongitudeSE,
+ CenterOfGravityDepth,
+ MinimumDepth,
+ MaximumDepth,
+ OffsetDepth,
+ CenterOfGravityDepthSE,
+ ]
+
+ del DatasetCode, Region, Season, DateCode, Species, CommonName
+ del CoreSpecies, Year, DistributionProjectName
+ del DistributionProjectCode, SummaryProduct, CenterOfGravityLatitude
+ del MinimumLatitude, MaximumLatitude, OffsetLatitude
+ del CenterOfGravityLatitudeSE, CenterOfGravityLongitude
+ del MinimumLongitude, MaximumLongitude, OffsetLongitude
+ del CenterOfGravityLongitudeSE, CenterOfGravityDepth, MinimumDepth
+ del MaximumDepth, OffsetDepth, CenterOfGravityDepthSE
+
+ # Append to list
+ row_values.append(row)
+ del row
+
+ del image_name, variable, species, commonname, corespecies, year, input_raster_path
+ del raster_year
+
+ del raster_years
+ del first_year
+
+ if "first_year_offset_latitude" in locals():
+ del first_year_offset_latitude
+ if "first_year_offset_longitude" in locals():
+ del first_year_offset_longitude
+ if "first_year_offset_depth" in locals():
+ del first_year_offset_depth
+
+ del region_bathymetry, region_latitude, region_longitude, input_rasters
+
+ arcpy.AddMessage("Inserting records into the table")
+
+ # This gets a list of fields in the table
+ fields = [f.name for f in arcpy.ListFields(region_indicators) if f.type not in ['Geometry', 'OID']]
+
+ # Open an InsertCursor
+ cursor = arcpy.da.InsertCursor(region_indicators, fields)
+ del fields
+
+ # Insert new rows into the table
+ for row in row_values:
+ try:
+ # NaN != NaN, so this maps float NaN cells to None (geodatabase NULL)
+ row = [None if x != x else x for x in row]
+ cursor.insertRow(row)
+ except: # noqa: E722
+ # Get the traceback object
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ # Concatenate information together concerning the error into a message string
+ pymsg = "PYTHON ERRORS:\nTraceback info:\n" + tbinfo + "\nError Info:\n" + str(sys.exc_info()[1])
+ sys.exit(pymsg)
+ finally:
+ del row
+
+ # Delete cursor object
+ del cursor
+
+ # Delete
+ del row_values
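The list comprehension in the insert loop above leans on the IEEE 754 rule that NaN compares unequal to itself: `x != x` is true only for float NaN, so NaN cells become None (geodatabase NULL) while strings, numbers, and existing None values pass through unchanged. A standalone sketch of that idiom, with a hypothetical helper name:

```python
def nan_to_none(row):
    """Replace float NaN with None; NaN is the only value where x != x."""
    return [None if x != x else x for x in row]
```

For example, `nan_to_none([1.0, float("nan"), "AI_IDW", None])` nulls only the NaN; `None != None` is False, so an existing None is kept rather than double-converted.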
+
+ getcount = arcpy.management.GetCount(region_indicators)[0]
+ arcpy.AddMessage(f'\n> "{os.path.basename(region_indicators)}" has {getcount} records\n')
+ del getcount
+
+ PrintRowContent = False
+ if PrintRowContent:
+ printRowContent(region_indicators)
+ del PrintRowContent
+
+ del region_indicators
+
+ arcpy.management.Delete(rf"{region_gdb}\Datasets")
+ arcpy.management.Delete(rf"{region_gdb}\{table_name}_Bathymetry")
+ arcpy.management.Delete(rf"{region_gdb}\{table_name}_Latitude")
+ arcpy.management.Delete(rf"{region_gdb}\{table_name}_Longitude")
+ arcpy.management.Delete(rf"{region_gdb}\{table_name}_Raster_Mask")
+ arcpy.management.Delete(rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName")
+
+ # Values from Datasets table
+ del datasetcode, region, season, datecode, distributionprojectcode
+ del distributionprojectname, summaryproduct
+ # Declared Variables assigned based on the passed paramater
+ del table_name, scratch_folder, project_folder, scratch_workspace
+ # Imported modules
+ del np, math, dismap_tools
+ # Passed paramater
+ del region_gdb
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ traceback.print_exc()
+ sys.exit()
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit as se:
+ arcpy.AddError(f"Caught a SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ sys.exit()
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an unhandled exception in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
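The latitude, longitude, and depth blocks in `worker` all follow one recipe: the center of gravity is a biomass-weighted mean, and the 5th/95th percentile bounds come from the cumulative biomass after sorting by the coordinate, picking the coordinate whose cumulative quantile lies nearest the target. A minimal pure-Python sketch of that recipe, with hypothetical helper names and no arcpy/NumPy dependency:

```python
def weighted_mean(values, weights):
    """Biomass-weighted center of gravity: sum(w * v) / sum(w)."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def weighted_quantile(values, weights, q):
    """Value whose cumulative weight fraction is nearest q, after sorting
    by value (mirrors the argsort / cumsum / argmin steps above)."""
    pairs = sorted(zip(values, weights))        # sort by coordinate value
    total = sum(w for _, w in pairs)
    running, best_val, best_diff = 0.0, pairs[0][0], float("inf")
    for v, w in pairs:
        running += w
        diff = abs(running / total - q)         # distance to target quantile
        if diff < best_diff:
            best_diff, best_val = diff, v
    return best_val
```

With equal weights, `weighted_quantile(values, weights, 0.95)` lands on the value near the top of the sorted order, matching how the script derives `MaximumDepth` and `MinimumDepth` from the cumulative biomass.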
+
+def preprocessing(project_gdb="", table_names="", clear_folder=True):
+ try:
+ import dismap_tools
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+ scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+
+ # Clear Scratch Folder
+ #ClearScratchFolder = True
+ #if ClearScratchFolder:
+ if clear_folder:
+ dismap_tools.clear_folder(folder=rf"{os.path.dirname(project_gdb)}\Scratch")
+ else:
+ pass
+ #del ClearScratchFolder
+ del clear_folder
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ del project_folder, scratch_workspace
+
+ if not table_names:
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"),
+ "TableName",
+ where_clause = "TableName LIKE '%_IDW'")]
+ else:
+ pass
+
+ for table_name in table_names:
+ arcpy.AddMessage(f"Pre-Processing: {table_name}")
+
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ region_scratch_workspace = os.path.join(scratch_folder, f"{table_name}", "scratch.gdb")
+
+ # Create Scratch Workspace for Region
+ if not arcpy.Exists(region_scratch_workspace):
+ os.makedirs(os.path.join(scratch_folder, table_name), exist_ok=True)
+ if not arcpy.Exists(region_scratch_workspace):
+ arcpy.AddMessage(f"Create File GDB: '{table_name}'")
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, f"{table_name}"), "scratch")
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del region_scratch_workspace
+ # # # CreateFileGDB
+ arcpy.AddMessage(f"Creating File GDB: '{table_name}'")
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # CreateFileGDB
+ # # # Datasets
+ # Process: Make Table View (Make Table View) (management)
+ datasets = rf'{project_gdb}\Datasets'
+ arcpy.AddMessage(f"'{os.path.basename(datasets)}' has {arcpy.management.GetCount(datasets)[0]} records")
+ arcpy.management.Copy(datasets, rf"{region_gdb}\Datasets")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # Datasets
+
+ # # # LayerSpeciesYearImageName
+ LayerSpeciesYearImageName = rf"{project_gdb}\{table_name}_LayerSpeciesYearImageName"
+ arcpy.AddMessage(f"The table '{table_name}_LayerSpeciesYearImageName' has {arcpy.management.GetCount(LayerSpeciesYearImageName)[0]} records")
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_LayerSpeciesYearImageName", rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del LayerSpeciesYearImageName
+ # # # LayerSpeciesYearImageName
+
+ # # # Raster_Mask
+ arcpy.AddMessage(f"Copy Raster Mask for '{table_name}'")
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_Raster_Mask", rf"{region_gdb}\{table_name}_Raster_Mask")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # Raster_Mask
+
+ # # # Bathymetry
+ arcpy.AddMessage(f"Copy Bathymetry for '{table_name}'")
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_Bathymetry", rf"{region_gdb}\{table_name}_Bathymetry")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # Bathymetry
+
+ # # # Latitude
+ arcpy.AddMessage(f"Copy Latitude for '{table_name}'")
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_Latitude", rf"{region_gdb}\{table_name}_Latitude")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # Latitude
+
+ # # # Longitude
+ arcpy.AddMessage(f"Copy Longitude for '{table_name}'")
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_Longitude", rf"{region_gdb}\{table_name}_Longitude")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # Longitude
+
+ # Declared Variables
+ del table_name
+ del datasets
+
+ # Declared Variables
+ del scratch_folder, region_gdb
+ # Imports
+ del dismap_tools
+ # Function Parameters
+ del project_gdb, table_names
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ traceback.print_exc()
+ sys.exit()
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit as se:
+ arcpy.AddError(f"Caught a SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ sys.exit()
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an unhandled exception in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+def script_tool(project_gdb=""):
+ try:
+ # Imports
+ import dismap_tools
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+
+
+ ## # Set worker parameters
+ ## #table_name = "AI_IDW"
+ ## table_name = "HI_IDW"
+ ## #table_name = "NBS_IDW"
+ ## #table_name = "ENBS_IDW"
+
+ table_names = ["HI_IDW",]
+
+ preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
+
+ for table_name in table_names:
+ region_gdb = rf"{os.path.dirname(project_gdb)}\Scratch\{table_name}.gdb"
+ try:
+ worker(region_gdb=region_gdb)
+ except SystemExit:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ del table_name, region_gdb
+ del table_names
+
+ # Declared Variables
+ # Imports
+ del dismap_tools
+ # Function Parameters
+ del project_gdb
+
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(end_time-start_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ traceback.print_exc()
+ sys.exit()
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit as se:
+ arcpy.AddError(f"Caught a SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ sys.exit()
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an unhandled exception in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+ script_tool(project_gdb)
+ arcpy.SetParameterAsText(1, "Result")
+ del project_gdb
+ except: # noqa: E722
+ traceback.print_exc()
+ else:
+ pass
+ finally:
+ pass
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_metadata_json_files.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_metadata_json_files.py
new file mode 100644
index 0000000..d7677c7
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_metadata_json_files.py
@@ -0,0 +1,616 @@
+"""
+Script documentation
+
+- Tool parameters are accessed using arcpy.GetParameter() or
+ arcpy.GetParameterAsText()
+- Update derived parameter values using arcpy.SetParameter() or
+ arcpy.SetParameterAsText()
+"""
+import os
+import sys
+import traceback
+import inspect
+
+import arcpy
+
+
+def script_tool(project_gdb=""):
+ """Script code goes below"""
+ try:
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ project_folder = os.path.dirname(project_gdb)
+ out_data_path = rf"{project_folder}\CSV_Data"
+
+ root_dict = {"Esri" : 0,
+ "dataIdInfo" : 1,
+ "dqInfo" : 2,
+ "distInfo" : 3,
+ "mdContact" : 4,
+ "mdLang" : 5,
+ "mdChar" : 6,
+ "mdDateSt" : 7,
+ "mdHrLv" : 8,
+ "mdHrLvName" : 9,
+ "mdFileID" : 10,
+ "mdParentID" : 11,
+ "mdMaint" : 12,
+ "refSysInfo" : 13,
+ "spatRepInfo" : 14,
+ "spdoinfo" : 15,
+ "spref" : 16,
+ "contInfo" : 17,
+ "dataSetFn" : 18,
+ "eainfo" : 19,
+ "Binary" : 20,
+ }
+
+ import json
+ json_path = rf"{out_data_path}\root_dict.json"
+ # Write to File
+ with open(json_path, 'w') as json_file:
+ json.dump(root_dict, json_file, indent=4)
+ del json_file
+ del root_dict
+ with open(json_path, "r") as json_file:
+ root_dict = json.load(json_file)
+ del json_file
+ arcpy.AddMessage(root_dict)
+ del root_dict
+ del json_path
+ del json
+
+ esri_dict ={"CreaDate" : 0,
+ "CreaTime" : 1,
+ "ArcGISFormat" : 2,
+ "ArcGISstyle" : 3,
+ "ArcGISProfile" : 4,
+ "SyncOnce" : 5,
+ "DataProperties" : 6,
+ "lineage" : 0,
+ "itemProps" : 1,
+ "itemName" : 0,
+ "imsContentType" : 1,
+ "nativeExtBox" : 2,
+ "westBL" : 0,
+ "eastBL" : 1,
+ "southBL" : 2,
+ "northBL" : 3,
+ "exTypeCode" : 4,
+ "itemLocation" : 3,
+ "linkage" : 0,
+ "protocol" : 1,
+ "coordRef" : 4,
+ "type" : 0,
+ "geogcsn" : 1,
+ "csUnits" : 2,
+ "projcsn" : 3,
+ "peXml" : 4,
+ "SyncDate" : 7,
+ "SyncTime" : 8,
+ "ModDate" : 9,
+ "ModTime" : 10,
+ "scaleRange" : 11,
+ "minScale" : 12,
+ "maxScale" : 13,
+ "locales" : 14,
+ }
+
+ import json
+ json_path = rf"{out_data_path}\esri_dict.json"
+ # Write to File
+ with open(json_path, 'w') as json_file:
+ json.dump(esri_dict, json_file, indent=4)
+ del json_file
+ del esri_dict
+ with open(json_path, "r") as json_file:
+ esri_dict = json.load(json_file)
+ del json_file
+ arcpy.AddMessage(esri_dict)
+ del esri_dict
+ del json_path
+ del json
+
+ dataIdInfo_dict = {"dataIdInfo" : 0,
+ "envirDesc" : 0,
+ "dataLang" : 1,
+ "dataChar" : 2,
+ "idCitation" : 3,
+ "resTitle" : 0,
+ "resAltTitle" : 1,
+ "collTitle" : 2,
+ "date" : 3,
+ "presForm" : 4,
+ "PresFormCd" : 0,
+ "fgdcGeoform" : 1,
+ "citRespParty" : 5,
+ "spatRpType" : 4,
+ "dataExt" : 5,
+ "exDesc" : 0,
+ "geoEle" : 1,
+ "GeoBndBox" : 0,
+ "exTypeCode" : 0,
+ "westBL" : 1,
+ "eastBL" : 2,
+ "northBL" : 3,
+ "southBL" : 4,
+ "tempEle" : 2,
+ "TempExtent" : 0,
+ "exTemp" : 0,
+ "TM_Period" : 0,
+ "tmBegin" : 0,
+ "tmEnd" : 1,
+ "TM_Instant" : 1,
+ "tmPosition" : 0,
+ "searchKeys" : 1,
+ "idPurp" : 2,
+ "idAbs" : 3,
+ "idCredit" : 4,
+ "idStatus" : 5,
+ "resConst" : 6,
+ "discKeys" : 7,
+ "keyword" : 0,
+ "thesaName" : 1,
+ "resTitle" : 0, # noqa: F601
+ "date" : 1, # noqa: F601
+ "createDate" : 0,
+ "pubDate" : 1,
+ "reviseDate" : 2,
+ "citOnlineRes" : 2,
+ "linkage" : 0,
+ "orFunct" : 1,
+ "OnFunctCd" : 0,
+ "thesaLang" : 2,
+ "languageCode" : 0,
+ "countryCode" : 1,
+ "themeKeys" : 8, # noqa: F601
+ "keyword" : 0, # noqa: F601
+ "thesaName" : 1, # noqa: F601
+ "resTitle" : 0, # noqa: F601
+ "date" : 1, # noqa: F601
+ "createDate" : 0, # noqa: F601
+ "pubDate" : 1, # noqa: F601
+ "reviseDate" : 2, # noqa: F601
+ "citOnlineRes" : 2, # noqa: F601
+ "linkage" : 0, # noqa: F601
+ "orFunct" : 1, # noqa: F601
+ "OnFunctCd" : 0, # noqa: F601
+ "thesaLang" : 2, # noqa: F601
+ "languageCode" : 0, # noqa: F601
+ "countryCode" : 1, # noqa: F601
+ "placeKeys" : 9,
+ "keyword" : 0, # noqa: F601
+ "thesaName" : 1, # noqa: F601
+ "resTitle" : 0, # noqa: F601
+ "date" : 1, # noqa: F601
+ "createDate" : 0, # noqa: F601
+ "pubDate" : 1, # noqa: F601
+ "reviseDate" : 2, # noqa: F601
+ "citOnlineRes" : 2, # noqa: F601
+ "linkage" : 0, # noqa: F601
+ "orFunct" : 1, # noqa: F601
+ "OnFunctCd" : 0, # noqa: F601
+ "thesaLang" : 2, # noqa: F601
+ "languageCode" : 0, # noqa: F601
+ "countryCode" : 1, # noqa: F601
+ "tempKeys" : 10, # noqa: F601
+ "keyword" : 0, # noqa: F601
+ "thesaName" : 1, # noqa: F601
+ "resTitle" : 0, # noqa: F601
+ "date" : 1, # noqa: F601
+ "createDate" : 0, # noqa: F601
+ "pubDate" : 1, # noqa: F601
+ "reviseDate" : 2, # noqa: F601
+ "citOnlineRes" : 2, # noqa: F601
+ "linkage" : 0, # noqa: F601
+ "orFunct" : 1, # noqa: F601
+ "OnFunctCd" : 0, # noqa: F601
+ "thesaLang" : 2, # noqa: F601
+ "languageCode" : 0, # noqa: F601
+ "countryCode" : 1, # noqa: F601
+ "otherKeys" : 11, # noqa: F601
+ "keyword" : 0, # noqa: F601
+ "thesaName" : 1, # noqa: F601
+ "resTitle" : 0, # noqa: F601
+ "date" : 1, # noqa: F601
+ "createDate" : 0, # noqa: F601
+ "pubDate" : 1, # noqa: F601
+ "reviseDate" : 2, # noqa: F601
+ "citOnlineRes" : 2, # noqa: F601
+ "linkage" : 0, # noqa: F601
+ "orFunct" : 1, # noqa: F601
+ "OnFunctCd" : 0, # noqa: F601
+ "thesaLang" : 2, # noqa: F601
+ "languageCode" : 0, # noqa: F601
+ "countryCode" : 1, # noqa: F601
+ "idPoC" : 11, # noqa: F601
+ "resMaint" : 12, # noqa: F601
+ "tpCat" : 18, # noqa: F601
+ }
+
+ import json
+ json_path = rf"{out_data_path}\dataIdInfo_dict.json"
+ # Write to File
+ with open(json_path, 'w') as json_file:
+ json.dump(dataIdInfo_dict, json_file, indent=4)
+ del json_file
+ del dataIdInfo_dict
+ with open(json_path, "r") as json_file:
+ dataIdInfo_dict = json.load(json_file)
+ del json_file
+ arcpy.AddMessage(dataIdInfo_dict)
+ del dataIdInfo_dict
+ del json_path
+ del json
+
+ idCitation_dict = {"idCitation" : 0,
+ "resTitle" : 0,
+ "resAltTitle" : 1,
+ "collTitle" : 2,
+ "presForm" : 3,
+ "PresFormCd" : 0,
+ "fgdcGeoform" : 1,
+ "date" : 4,
+ "createDate" : 0,
+ "pubDate" : 1,
+ "reviseDate" : 2,
+ "citRespParty" : 6,
+ }
+
+ import json
+ json_path = rf"{out_data_path}\idCitation_dict.json"
+ # Write to File
+ with open(json_path, 'w') as json_file:
+ json.dump(idCitation_dict, json_file, indent=4)
+ del json_file
+ del idCitation_dict
+ with open(json_path, "r") as json_file:
+ idCitation_dict = json.load(json_file)
+ del json_file
+ arcpy.AddMessage(idCitation_dict)
+ del idCitation_dict
+ del json_path
+ del json
+
+ contact_element_order_dict = {"editorSource" : 0, "editorDigest" : 1,"rpIndName" : 2,
+ "rpOrgName" : 3, "rpPosName" : 4, "rpCntInfo" : 5,
+ "cntAddress" : 0, "delPoint" : 0, "city" : 1,
+ "adminArea" : 2, "postCode" : 3, "eMailAdd" : 4,
+ "country" : 5, "cntPhone" : 1, "voiceNum" : 0,
+ "faxNum" : 1, "cntHours" : 2, "cntOnlineRes" : 3,
+ "linkage" : 0, "protocol" : 1, "orName" : 2,
+ "orDesc" : 3, "orFunct" : 4, "OnFunctCd" : 0,
+ "editorSave" : 6, "displayName" : 7, "role" : 8,
+ "RoleCd" : 0, "srcCitatn" : 1, "resTitle" : 0,
+ "resAltTitle" : 1, "collTitle" : 2, "date" : 10,
+ "createDate" : 0, "pubDate" : 1, "reviseDate" : 2,
+ "presForm" : 3, "PresFormCd" : 0, "fgdcGeoform" : 1,
+ "citRespParty" : 6, "citOnlineRes" : 2,
+ }
+
+ import json
+ json_path = rf"{out_data_path}\contact_element_order_dict.json"
+ # Write to File
+ with open(json_path, 'w') as json_file:
+ json.dump(contact_element_order_dict, json_file, indent=4)
+ del json_file
+ del contact_element_order_dict
+ with open(json_path, "r") as json_file:
+ contact_element_order_dict = json.load(json_file)
+ del json_file
+ arcpy.AddMessage(contact_element_order_dict)
+ del contact_element_order_dict
+ del json_path
+ del json
+
+ dqInfo_dict = { "dqScope" : 0,
+ "scpLvl" : 0,
+ "ScopeCd" : 0,
+ "scpLvlDesc" : 1,
+ "datasetSet" : 0,
+ "report" : 1,
+ "measDesc" : 0,
+ "measResult" : 1,
+ "dataLineage" : 3,
+ "statement" : 0,
+ "dataSource" : 1,
+ "srcDesc" : 0,
+ "srcCitatn" : 1,
+ "resTitle" : 0,
+ "resAltTitle" : 1,
+ "collTitle" : 2,
+ "citOnlineRes" : 2,
+ "linkage" : 0,
+ "protocol" : 1,
+ "orName" : 2,
+ "orDesc" : 3,
+ "orFunct" : 4,
+ "OnFunctCd" : 0,
+ "date" : 3,
+ "createDate" : 0,
+ "pubDate" : 1,
+ "reviseDate" : 2,
+ "otherCitDet" : 4,
+ "presForm" : 5,
+ "PresFormCd" : 0,
+ "fgdcGeoform" : 1,
+ "citRespParty" : 6,
+ "editorSource" : 0, "editorDigest" : 1,"rpIndName" : 2,
+ "rpOrgName" : 3, "rpPosName" : 4, "rpCntInfo" : 5,
+ "cntAddress" : 0, "delPoint" : 0, "city" : 1,
+ "adminArea" : 2, "postCode" : 3, "eMailAdd" : 4,
+ "country" : 5, "cntPhone" : 1, "voiceNum" : 0,
+ "faxNum" : 1, "cntHours" : 2, "cntOnlineRes" : 3,
+ "linkage" : 0, "protocol" : 1, "orName" : 2, # noqa: F601
+ "orDesc" : 3, "orFunct" : 4, "OnFunctCd" : 0, # noqa: F601
+ "editorSave" : 6, "displayName" : 7, "role" : 8, # noqa: F601
+ "RoleCd" : 0,
+ "srcMedName" : 7,
+ "MedNameCd" : 0,
+ "prcStep" : 3,
+ "stepDesc" : 0,
+ "stepProc" : 1,
+ "editorSource" : 0, "editorDigest" : 1,"rpIndName" : 2, # noqa: F601
+ "rpOrgName" : 3, "rpPosName" : 4, "rpCntInfo" : 5, # noqa: F601
+ "cntAddress" : 0, "delPoint" : 0, "city" : 1, # noqa: F601
+ "adminArea" : 2, "postCode" : 3, "eMailAdd" : 4, # noqa: F601
+ "country" : 5, "cntPhone" : 1, "voiceNum" : 0, # noqa: F601
+ "faxNum" : 1, "cntHours" : 2, "cntOnlineRes" : 3, # noqa: F601
+ "linkage" : 0, "protocol" : 1, "orName" : 2, # noqa: F601
+ "orDesc" : 3, "orFunct" : 4, "OnFunctCd" : 0, # noqa: F601
+ "editorSave" : 6, "displayName" : 7, "role" : 8, # noqa: F601
+ "RoleCd" : 0, # noqa: F601
+
+ "stepDateTm" : 2,
+ "cntOnlineRes" : 3, "linkage" : 0, # noqa: F601
+ "protocol" : 1, "orName" : 2, "orDesc" : 3, # noqa: F601
+ "orFunct" : 4, "OnFunctCd" : 0, # noqa: F601
+ }
+
+ import json
+ json_path = rf"{out_data_path}\dqInfo_dict.json"
+ # Write to File
+ with open(json_path, 'w') as json_file:
+ json.dump(dqInfo_dict, json_file, indent=4)
+ del json_file
+ del dqInfo_dict
+ with open(json_path, "r") as json_file:
+ dqInfo_dict = json.load(json_file)
+ del json_file
+ arcpy.AddMessage(dqInfo_dict)
+ del dqInfo_dict
+ del json_path
+ del json
+
+ distInfo_dict = {"distInfo" : 0,
+ "distFormat" : 0,
+ "formatName" : 0,
+ "formatVer" : 1,
+ "fileDecmTech" : 2,
+ "formatInfo" : 3,
+ "distributor" : 1,
+ "distorCont" : 0,
+ "editorSource" : 0,
+ "editorDigest" : 1,
+ "rpIndName" : 2,
+ "rpOrgName" : 3,
+ "rpPosName" : 4,
+ "rpCntInfo" : 5,
+ "cntAddress" : 0,
+ "delPoint" : 0,
+ "city" : 1,
+ "adminArea" : 2,
+ "postCode" : 3,
+ "eMailAdd" : 4,
+ "country" : 5,
+ "cntPhone" : 1,
+ "voiceNum" : 0,
+ "faxNum" : 1,
+ "cntHours" : 2,
+ "cntOnlineRes" : 3,
+ "linkage" : 0,
+ "orName" : 1,
+ "orDesc" : 2,
+ "orFunct" : 3,
+ "OnFunctCd" : 0,
+ "editorSave" : 6,
+ "displayName" : 7,
+ "role" : 8,
+ "RoleCd" : 0,
+ "distTranOps" : 2,
+ "unitsODist" : 0,
+ "transSize" : 1,
+ "onLineSrc" : 2,
+ "linkage" : 0, # noqa: F601
+ "protocol" : 1,
+ "orName" : 2, # noqa: F601
+ "orDesc" : 3, # noqa: F601
+ "orFunct" : 4, # noqa: F601
+ "OnFunctCd" : 0, # noqa: F601
+ }
+
+ import json
+ json_path = rf"{out_data_path}\distInfo_dict.json"
+ # Write to File
+ with open(json_path, 'w') as json_file:
+ json.dump(distInfo_dict, json_file, indent=4)
+ del json_file
+ del distInfo_dict
+ with open(json_path, "r") as json_file:
+ distInfo_dict = json.load(json_file)
+ del json_file
+ arcpy.AddMessage(distInfo_dict)
+ del distInfo_dict
+ del json_path
+ del json
+
+ RoleCd_dict = {"001" : "Resource Provider", "002" : "Custodian",
+ "003" : "Owner", "004" : "User",
+ "005" : "Distributor", "006" : "Originator",
+ "007" : "Point of Contact", "008" : "Principal Investigator",
+ "009" : "Processor", "010" : "Publisher",
+ "011" : "Author", "012" : "Collaborator",
+ "013" : "Editor", "014" : "Mediator",
+ "015" : "Rights Holder",}
+
+ import json
+ json_path = rf"{out_data_path}\RoleCd_dict.json"
+ # Write to File
+ with open(json_path, 'w') as json_file:
+ json.dump(RoleCd_dict, json_file, indent=4)
+ del json_file
+ del RoleCd_dict
+ with open(json_path, "r") as json_file:
+ RoleCd_dict = json.load(json_file)
+ del json_file
+ arcpy.AddMessage(RoleCd_dict)
+ del RoleCd_dict
+ del json_path
+ del json
+
+ #role_dict = {"citRespParty" : ,
+ # "idPoC" : ,
+ # "distorCont" : ,
+ # "mdContact" : ,
+ # "stepProc"
+
+
+ tpCat_dict = {"002": '',
+ "007": '',
+ "014": '',}
+
+ import json
+ json_path = rf"{out_data_path}\tpCat_dict.json"
+ # Write to File
+ with open(json_path, 'w') as json_file:
+ json.dump(tpCat_dict, json_file, indent=4)
+ del json_file
+ del tpCat_dict
+ with open(json_path, "r") as json_file:
+ tpCat_dict = json.load(json_file)
+ del json_file
+ arcpy.AddMessage(tpCat_dict)
+ del tpCat_dict
+ del json_path
+ del json
+
+ # ###################### DisMAP ########################################
+ RoleCd_dict = {"001" : "Resource Provider", "002" : "Custodian",
+ "003" : "Owner", "004" : "User",
+ "005" : "Distributor", "006" : "Originator",
+ "007" : "Point of Contact", "008" : "Principal Investigator",
+ "009" : "Processor", "010" : "Publisher",
+ "011" : "Author", "012" : "Collaborator",
+ "013" : "Editor", "014" : "Mediator",
+ "015" : "Rights Holder",}
+ contact_dict = {"citRespParty" : [{"role" : "Custodian", "rpIndName" : "Timothy J Haverland", "eMailAdd" : "tim.haverland@noaa.gov"},],
+ "idPoC" : [{"role" : "Point of Contact", "rpIndName" : "Melissa Ann Karp", "eMailAdd" : "melissa.karp@noaa.gov"},],
+ "distorCont" : [{"role" : "Distributor", "rpIndName" : "Timothy J Haverland", "eMailAdd" : "tim.haverland@noaa.gov"},],
+ "mdContact" : [{"role" : "Author", "rpIndName" : "John F Kennedy", "eMailAdd" : "john.f.kennedy@noaa.gov"},],
+ "srcCitatn" : [{"role" : "Principal Investigator", "rpIndName" : "Melissa Ann Karp", "eMailAdd" : "melissa.karp@noaa.gov"},],
+ "stepProc" : [{"role" : "Processor", "rpIndName" : "John F Kennedy", "eMailAdd" : "john.f.kennedy@noaa.gov"},
+ {"role" : "Processor", "rpIndName" : "Melissa Ann Karp", "eMailAdd" : "melissa.karp@noaa.gov"},
+ ],}
+ del RoleCd_dict
+
+ import json
+ json_path = rf"{out_data_path}\contact_dict.json"
+ #arcpy.AddMessage(json_path)
+ # Write to File
+ with open(json_path, 'w') as json_file:
+ json.dump(contact_dict, json_file, indent=4)
+ del json_file
+ del contact_dict
+ with open(json_path, "r") as json_file:
+ contact_dict = json.load(json_file)
+ del json_file
+ arcpy.AddMessage(contact_dict)
+ del contact_dict
+ del json_path
+ del json
+
+ # ###################### DisMAP ########################################
+
+ # Declared Variables
+ del project_folder, out_data_path
+ # Imports
+ # Function Parameters
+ del project_gdb
+
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(end_time-start_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit as se:
+ arcpy.AddError(f"Caught a SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ sys.exit()
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an unexpected error in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+if __name__ == "__main__":
+ try:
+
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\February 1 2026\February 1 2026.gdb"
+ else:
+ pass
+
+ script_tool(project_gdb=project_gdb)
+ arcpy.SetParameterAsText(1, "Result")
+ del project_gdb
+
+ except SystemExit:
+ pass
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ else:
+ pass
+ finally:
+ sys.exit()
\ No newline at end of file
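The dictionary sections above repeatedly dump a lookup table to JSON and immediately reload it to confirm the file round-trips. A minimal, self-contained sketch of that write-then-read-back pattern (paths here are hypothetical stand-ins for `out_data_path`; `tempfile` replaces the project folder so the snippet runs anywhere):

```python
# Sketch of the RoleCd_dict JSON round-trip: write the lookup to disk,
# then read it back and compare. Assumes nothing about arcpy.
import json
import os
import tempfile

role_cd = {"001": "Resource Provider", "002": "Custodian"}

out_dir = tempfile.mkdtemp()                      # stands in for out_data_path
json_path = os.path.join(out_dir, "RoleCd_dict.json")

with open(json_path, "w") as json_file:
    json.dump(role_cd, json_file, indent=4)       # write to file

with open(json_path, "r") as json_file:
    loaded = json.load(json_file)                 # read it back

assert loaded == role_cd                          # JSON keys stay strings
```

Because the keys are already strings, the reloaded dictionary compares equal to the original; integer keys would not survive the round-trip unchanged.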
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_mosaics_director.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_mosaics_director.py
new file mode 100644
index 0000000..38542fe
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_mosaics_director.py
@@ -0,0 +1,477 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_mosaics_director
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 09/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+
+import arcpy # third-parties second
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ #filename = sys.path[0] + os.sep + f"{os.path.basename(__file__)}"
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def preprocessing(project_gdb="", table_names="", clear_folder=True):
+ try:
+ import dismap_tools
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+ scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+
+ # Clear Scratch Folder
+ #ClearScratchFolder = True
+ #if ClearScratchFolder:
+ if clear_folder:
+ dismap_tools.clear_folder(folder=rf"{os.path.dirname(project_gdb)}\Scratch")
+ else:
+ pass
+ #del ClearScratchFolder
+ del clear_folder
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ del project_folder, scratch_workspace
+
+ if not table_names:
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"),
+ "TableName",
+ where_clause = "TableName LIKE '%_IDW'")]
+ else:
+ pass
+
+ for table_name in table_names:
+ arcpy.AddMessage(f"Pre-Processing: {table_name}")
+
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ region_scratch_workspace = os.path.join(scratch_folder, f"{table_name}", "scratch.gdb")
+
+ # Create Scratch Workspace for Region
+ if not arcpy.Exists(region_scratch_workspace):
+ os.makedirs(os.path.join(scratch_folder, table_name))
+ if not arcpy.Exists(region_scratch_workspace):
+ arcpy.AddMessage(f"Create File GDB: '{table_name}'")
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, f"{table_name}"), "scratch")
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del region_scratch_workspace
+ # # # CreateFileGDB
+ arcpy.AddMessage(f"Creating File GDB: '{table_name}'")
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # CreateFileGDB
+ # # # Datasets
+ # Process: Make Table View (Make Table View) (management)
+ datasets = rf'{project_gdb}\Datasets'
+ arcpy.AddMessage(f"'{os.path.basename(datasets)}' has {arcpy.management.GetCount(datasets)[0]} records")
+ arcpy.management.Copy(datasets, rf"{region_gdb}\Datasets")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # Datasets
+
+ # # # LayerSpeciesYearImageName
+ LayerSpeciesYearImageName = rf"{project_gdb}\{table_name}_LayerSpeciesYearImageName"
+ arcpy.AddMessage(f"The table '{table_name}_LayerSpeciesYearImageName' has {arcpy.management.GetCount(LayerSpeciesYearImageName)[0]} records")
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_LayerSpeciesYearImageName", rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del LayerSpeciesYearImageName
+ # # # LayerSpeciesYearImageName
+
+ # # # Raster_Mask
+ arcpy.AddMessage(f"Copy Raster Mask for '{table_name}'")
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_Raster_Mask", rf"{region_gdb}\{table_name}_Raster_Mask")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # Raster_Mask
+
+ del datasets
+ # Declared Variables
+ del table_name
+
+ # Declared Variables
+ del scratch_folder, region_gdb
+ # Imports
+ del dismap_tools
+ # Function Parameters
+ del project_gdb, table_names
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def director(project_gdb="", Sequential=True, table_names=[]):
+ try:
+ # Imports
+ import dismap_tools
+ from create_mosaics_worker import worker
+
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(rf"{project_gdb}"):
+ arcpy.AddError(f"{os.path.basename(project_gdb)} is missing!!")
+ arcpy.AddError(arcpy.GetMessages(2))
+ sys.exit()
+ else:
+ pass
+
+ # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic arcpy.env values
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = rf"{os.path.dirname(project_gdb)}\Scratch\scratch.gdb"
+
+ preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
+
+ # Set basic workspace variables
+ scratch_folder = rf"{os.path.dirname(project_gdb)}\Scratch"
+ csv_data_folder = rf"{os.path.dirname(project_gdb)}\CSV_Data"
+
+ # Sequential Processing
+ if Sequential:
+ arcpy.AddMessage("Sequential Processing")
+ for i in range(0, len(table_names)):
+ arcpy.AddMessage(f"Processing: {table_names[i]}")
+ table_name = table_names[i]
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ try:
+ worker(region_gdb=region_gdb)
+ except SystemExit:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ del region_gdb, table_name
+ del i
+ else:
+ pass
+
+ # Non-Sequential Processing
+ if not Sequential:
+ arcpy.AddMessage("Non-Sequential Processing")
+ # Imports
+ import multiprocessing
+ from time import time, localtime, strftime, sleep, gmtime
+ arcpy.AddMessage("Start multiprocessing using the ArcGIS Pro pythonw.exe.")
+ #Set multiprocessing exe in case we're running as an embedded process, i.e. ArcGIS
+ #get_install_path() uses a registry query to figure out 64bit python exe if available
+ multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
+ # Get CPU count and then take 2 away for other processes
+ _processes = multiprocessing.cpu_count() - 2
+ _processes = _processes if len(table_names) >= _processes else len(table_names)
+ arcpy.AddMessage(f"Creating the multiprocessing Pool with {_processes} processes")
+ #Create a pool of workers, keeping two CPUs free for other work.
+ #Let each worker process only handle 1 task before being restarted (in case of nasty memory leaks)
+ with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
+ arcpy.AddMessage("\tPrepare arguments for processing")
+ # Use apply_async so we can handle exceptions gracefully
+ jobs={}
+ for i in range(0, len(table_names)):
+ try:
+ arcpy.AddMessage(f"Processing: {table_names[i]}")
+ table_name = table_names[i]
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ jobs[table_name] = pool.apply_async(worker, [region_gdb])
+ del table_name, region_gdb
+ except: # noqa: E722
+ pool.terminate()
+ traceback.print_exc()
+ sys.exit()
+ del i
+ all_finished = False
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ result_completed = {}
+ while True:
+ all_finished = True
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ arcpy.AddMessage(f"\nStart Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage("Have the workers finished?")
+ finish_time = strftime('%a %b %d %I:%M %p', localtime())
+ time_elapsed = u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
+ arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
+ finish_time = f"{finish_time}.\n\t{time_elapsed}"
+ del time_elapsed
+ for table_name, result in jobs.items():
+ if result.ready():
+ if table_name not in result_completed:
+ result_completed[table_name] = finish_time
+ try:
+ # wait for and get the result from the task
+ result.get()
+ except: # noqa: E722
+ pool.terminate()
+ traceback.print_exc()
+ sys.exit()
+ else:
+ pass
+ arcpy.AddMessage(f"Process {table_name}\n\tFinished on {result_completed[table_name]}")
+ else:
+ all_finished = False
+ arcpy.AddMessage(f"Process {table_name} is running. . .")
+ del table_name, result
+ del elapse_time, end_time, finish_time
+ if all_finished:
+ break
+ sleep(_processes * 7.5)
+ del result_completed
+ del start_time
+ del all_finished
+ arcpy.AddMessage("Close the process pool")
+ # close the process pool
+ pool.close()
+ # wait for all tasks to complete and processes to close
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
+ pool.join()
+ # Just in case
+ pool.terminate()
+ del pool
+ del jobs
+ del _processes
+ del time, multiprocessing, localtime, strftime, sleep, gmtime
+ arcpy.AddMessage("Done with multiprocessing Pool\n")
+
+ # Post-Processing
+ arcpy.AddMessage("Post-Processing Begins")
+
+ crf_folder = rf"{os.path.dirname(project_gdb)}\CRFs"
+
+ datasets = list()
+ walk = arcpy.da.Walk(scratch_folder, datatype=["RasterDataset", "MosaicDataset"])
+ for dirpath, dirnames, filenames in walk:
+ for filename in filenames:
+ datasets.append(os.path.join(dirpath, filename))
+ del filename
+ del dirpath, dirnames, filenames
+ del walk
+ for dataset in datasets:
+ datasets_short_path = f".. {'/'.join(dataset.split(os.sep)[-4:])}"
+ dataset_name = os.path.basename(dataset)
+ dataset_type = arcpy.Describe(dataset).datatype
+ region_gdb = os.path.dirname(dataset)
+ arcpy.AddMessage(f"\tDataset: '{dataset_name}'")
+ arcpy.AddMessage(f"\t\tType: '{dataset_type}'")
+ arcpy.AddMessage(f"\t\tPath: '{datasets_short_path}'")
+ arcpy.AddMessage(f"\t\tRegion GDB: '{os.path.basename(region_gdb)}'")
+ if dataset.endswith("Mosaic"):
+ try:
+ if arcpy.Exists(rf"{project_gdb}\{dataset_name}"):
+ arcpy.management.Delete(rf"{project_gdb}\{dataset_name}")
+ else:
+ pass
+ arcpy.AddMessage(f"Copy '{dataset_name}'")
+ arcpy.management.Copy(in_data = dataset,
+ out_data = rf"{project_gdb}\{dataset_name}",
+ data_type = "MosaicDataset",
+ associated_data = "MosaicCatalogItemCategoryDomain 'CV domain' MosaicCatalogItemCategoryDomain DEFAULTS")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ #arcpy.AddMessage(f"\t\tAlter Fields for: '{dataset_name}'")
+ #dismap_tools.alter_fields(csv_data_folder, rf"{project_gdb}\{dataset_name}")
+ dismap_tools.import_metadata(csv_data_folder, rf"{project_gdb}\{dataset_name}")
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+
+ elif dataset.endswith(".crf"):
+ try:
+ if arcpy.Exists(rf"{crf_folder}\{dataset_name}"):
+ arcpy.management.Delete(rf"{crf_folder}\{dataset_name}")
+ else:
+ pass
+ arcpy.AddMessage(f"Copy '{dataset_name}'")
+ arcpy.management.Copy(in_data = dataset,
+ out_data = rf"{crf_folder}\{dataset_name}",
+ data_type = "MosaicDataset",
+ associated_data = "MosaicCatalogItemCategoryDomain 'CV domain' MosaicCatalogItemCategoryDomain DEFAULTS")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ dismap_tools.import_metadata(csv_data_folder, rf"{project_gdb}\{dataset_name}")
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ raise SystemExit
+ else:
+ pass
+ arcpy.management.Delete(dataset)
+ arcpy.AddMessage("\tDelete: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del region_gdb, dataset_name, datasets_short_path, dataset_type
+ del dataset
+ del datasets
+
+ arcpy.AddMessage(f"Compacting the {os.path.basename(project_gdb)} GDB")
+ arcpy.management.Compact(project_gdb)
+ arcpy.AddMessage("\t"+arcpy.GetMessages().replace("\n", "\n\t"))
+ # Declared Variables assigned in function
+ del scratch_folder, csv_data_folder, crf_folder
+ # Imports
+ del worker, dismap_tools
+ # Function Parameters
+ del project_gdb, Sequential, table_names
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def script_tool(project_gdb=""):
+ try:
+ # Imports
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+## # Clear Scratch Folder
+## ClearScratchFolder = False
+## if ClearScratchFolder:
+## import dismap_tools
+## dismap_tools.clear_folder(folder=scratch_folder)
+## del dismap_tools
+## else:
+## pass
+## del ClearScratchFolder
+
+ try:
+ # "AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW",
+ # "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",
+ Test = False
+ if Test:
+ director(project_gdb=project_gdb, Sequential=True, table_names=["SEUS_FAL_IDW", "HI_IDW", "NBS_IDW",])
+ elif not Test:
+ director(project_gdb=project_gdb, Sequential=True, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=True, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
+ else:
+ pass
+ del Test
+
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+
+ # Declared Variables
+
+ # Imports
+ # Function Parameters
+ del project_gdb
+
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(end_time-start_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+ script_tool(project_gdb)
+ arcpy.SetParameterAsText(1, "Result")
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
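The director's non-sequential branch fans one job per region out to a `multiprocessing.Pool` via `apply_async`, then polls `result.ready()` until every job finishes. A minimal sketch of that control flow, with a hypothetical `fake_worker` standing in for `create_mosaics_worker.worker` so the pattern is runnable without arcpy:

```python
# Sketch of the director's fan-out/poll pattern. fake_worker is a
# stand-in for worker(region_gdb); the real one runs geoprocessing.
import multiprocessing
from time import sleep

def fake_worker(region_gdb):
    # Pretend to process one region GDB and report back
    return f"done: {region_gdb}"

def run_jobs(region_gdbs, processes=2):
    # maxtasksperchild=1 restarts each worker process after a single
    # task, containing memory leaks from long geoprocessing runs
    with multiprocessing.Pool(processes=processes, maxtasksperchild=1) as pool:
        jobs = {gdb: pool.apply_async(fake_worker, [gdb]) for gdb in region_gdbs}
        results = {}
        while jobs:
            for gdb, res in list(jobs.items()):
                if res.ready():
                    results[gdb] = res.get()   # re-raises worker exceptions
                    del jobs[gdb]
            sleep(0.05)                        # poll, don't busy-wait
        pool.close()
        pool.join()
    return results

if __name__ == "__main__":
    print(run_jobs(["AI_IDW.gdb", "EBS_IDW.gdb"]))
```

Calling `res.get()` inside the polling loop is what surfaces worker exceptions in the parent, which is why the director wraps it in a try/except that terminates the pool.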
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_mosaics_worker.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_mosaics_worker.py
new file mode 100644
index 0000000..12fc26c
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_mosaics_worker.py
@@ -0,0 +1,430 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_mosaics_worker
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 09/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+
+import arcpy # third-parties second
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ #filename = sys.path[0] + os.sep + f"{os.path.basename(__file__)}"
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def worker(region_gdb=""):
+ try:
+
+ # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ table_name = os.path.basename(region_gdb).replace(".gdb","")
+ scratch_folder = os.path.dirname(region_gdb)
+ project_folder = os.path.dirname(scratch_folder)
+ scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+ region_raster_mask = rf"{region_gdb}\{table_name}_Raster_Mask"
+
+ arcpy.AddMessage(f"Table Name: {table_name}\nProject Folder: {os.path.basename(project_folder)}\nScratch Folder: {os.path.basename(scratch_folder)}\n")
+
+ # Set basic workspace variables
+ arcpy.env.workspace = region_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ #arcpy.env.compression = "LZ77"
+ #arcpy.env.geographicTransformations = "WGS_1984_(ITRF08)_To_NAD_1983_2011"
+ #arcpy.env.pyramid = "PYRAMIDS -1 BILINEAR LZ77 NO_SKIP"
+ arcpy.env.resamplingMethod = "BILINEAR"
+ arcpy.env.rasterStatistics = "STATISTICS 1 1"
+ #arcpy.env.buildStatsAndRATForTempRaster = True
+
+ # DatasetCode, CSVFile, TransformUnit, TableName, GeographicArea, CellSize,
+ # PointFeatureType, FeatureClassName, Region, Season, DateCode, Status,
+ # DistributionProjectCode, DistributionProjectName, SummaryProduct,
+ # FilterRegion, FilterSubRegion, FeatureServiceName, FeatureServiceTitle,
+ # MosaicName, MosaicTitle, ImageServiceName, ImageServiceTitle
+
+ # Get values for table_name from Datasets table
+ fields = ["TableName", "GeographicArea", "DatasetCode", "CellSize", "MosaicName", "MosaicTitle"]
+ region_list = [row for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", fields, where_clause = f"TableName = '{table_name}'")][0]
+ del fields
+
+ # Assigning variables from items in the chosen table list
+ # ['AI_IDW', 'AI_IDW_Region', 'AI', 'Aleutian Islands', None, 'IDW']
+ table_name = region_list[0]
+ #geographic_area = region_list[1]
+ datasetcode = region_list[2]
+ cell_size = region_list[3]
+ mosaic_name = region_list[4]
+ mosaic_title = region_list[5]
+ del region_list
+
+ # Start of business logic for the worker function
+ arcpy.AddMessage(f"Processing: {table_name}")
+
+ #geographic_area_sr = rf"{project_folder}\Dataset_Shapefiles\{table_name}\{geographic_area}.prj"
+ # Set the output coordinate system to what is needed for the
+ # DisMAP project
+ #psr = arcpy.SpatialReference(geographic_area_sr)
+ #arcpy.env.outputCoordinateSystem = psr
+ #del geographic_area_sr, geographic_area
+
+ arcpy.AddMessage("\tSet the 'outputCoordinateSystem' based on the projection information for the geographic region")
+ psr = arcpy.Describe(region_raster_mask).spatialReference
+ arcpy.env.outputCoordinateSystem = psr
+ del region_raster_mask
+
+ arcpy.AddMessage("Building the 'input_raster_paths' list")
+
+ layerspeciesyearimagename = rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName"
+
+ input_raster_paths = []
+
+ fields = ['Variable', 'ImageName']
+ with arcpy.da.SearchCursor(layerspeciesyearimagename, fields, where_clause = f"DatasetCode = '{datasetcode}'") as cursor:
+ for row in cursor:
+ variable, image_name = row[0], row[1]
+ #if variable not in variables: variables.append(variable)
+ #arcpy.AddMessage(f"{variable}, {image_name}")
+ variable = f"_{variable}" if "Species Richness" in variable else variable
+ input_raster_path = rf"{project_folder}\Images\{table_name}\{variable}\{image_name}.tif"
+ if arcpy.Exists(input_raster_path):
+ #arcpy.AddMessage(input_raster_path)
+ input_raster_paths.append(input_raster_path)
+ else:
+ arcpy.AddError(f"{os.path.basename(input_raster_path)} is missing!!")
+ #arcpy.AddMessage(input_raster_path)
+ del row, variable, image_name, input_raster_path
+ del cursor
+ del fields
+
+ mosaic_path = os.path.join(region_gdb, mosaic_name)
+
+ # Loading images into the Mosaic.
+ arcpy.AddMessage(f"Loading the '{table_name}' Mosaic. This may take a while. . . Please wait. . .")
+
+ with arcpy.EnvManager(scratchWorkspace = scratch_workspace, workspace = region_gdb):
+ arcpy.management.CreateMosaicDataset(in_workspace = region_gdb,
+ in_mosaicdataset_name = mosaic_name,
+ coordinate_system = psr,
+ num_bands = "1",
+ pixel_type = "32_BIT_FLOAT",
+ product_definition = "",
+ product_band_definitions = "")
+
+ arcpy.AddMessage("\tCreate Mosaic Dataset: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.AddMessage(f"Loading Rasters into the {os.path.basename(mosaic_path)}.")
+
+ arcpy.management.AddRastersToMosaicDataset(in_mosaic_dataset = mosaic_path,
+ raster_type = "Raster Dataset",
+ input_path = input_raster_paths,
+ update_cellsize_ranges = "UPDATE_CELL_SIZES",
+ #update_cellsize_ranges = "NO_CELL_SIZES",
+ update_boundary = "UPDATE_BOUNDARY",
+ #update_boundary = "NO_BOUNDARY",
+ update_overviews = "NO_OVERVIEWS",
+ maximum_pyramid_levels = None,
+ maximum_cell_size = "0",
+ minimum_dimension = "1500",
+ spatial_reference = psr,
+ filter = "",
+ sub_folder = "NO_SUBFOLDERS",
+ #duplicate_items_action = "OVERWRITE_DUPLICATES",
+ duplicate_items_action = "EXCLUDE_DUPLICATES",
+ build_pyramids = "NO_PYRAMIDS",
+ #calculate_statistics = "CALCULATE_STATISTICS",
+ calculate_statistics = "NO_STATISTICS",
+ #build_thumbnails = "BUILD_THUMBNAILS",
+ build_thumbnails = "NO_THUMBNAILS",
+ operation_description = "DisMAP",
+ #force_spatial_reference= "NO_FORCE_SPATIAL_REFERENCE",
+ force_spatial_reference = "FORCE_SPATIAL_REFERENCE",
+ #estimate_statistics = "ESTIMATE_STATISTICS",
+ estimate_statistics = "NO_STATISTICS",
+ )
+ arcpy.AddMessage("\tAdd Rasters To Mosaic Dataset: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del input_raster_paths
+ del psr
+
+ arcpy.AddMessage(f"Joining {os.path.basename(mosaic_path)} with {os.path.basename(layerspeciesyearimagename)}")
+
+ arcpy.management.JoinField(in_data = mosaic_path, in_field="Name", join_table = layerspeciesyearimagename, join_field="ImageName", fields="DatasetCode;Region;Season;Species;CommonName;SpeciesCommonName;CoreSpecies;Year;StdTime;Variable;Value;Dimensions")
+ arcpy.AddMessage("\tJoin Field: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del layerspeciesyearimagename
+
+ arcpy.AddMessage(f'Removing field index from {os.path.basename(mosaic_path)}')
+
+ try:
+ arcpy.management.RemoveIndex(mosaic_path, [f"{table_name}_MosaicSpeciesIndex",])
+ except: # noqa: E722
+ pass
+
+ arcpy.AddMessage(f"Adding field index to {os.path.basename(mosaic_path)}")
+
+ # Add Attribute Index
+ arcpy.management.AddIndex(mosaic_path, ['Species', 'CommonName', 'SpeciesCommonName', 'Year'], f"{table_name}_MosaicSpeciesIndex", "NON_UNIQUE", "NON_ASCENDING")
+ arcpy.AddMessage("\tAdd Index: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.management.CalculateStatistics(mosaic_path, 1, 1, [], "OVERWRITE", "")
+ arcpy.AddMessage("\tCalculate Statistics: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ #--->>> SetMosaicDatasetProperties
+ arcpy.AddMessage(f"Set Mosaic Dataset Properties for {os.path.basename(mosaic_path)}")
+
+ #fields = [f.name for f in arcpy.ListFields(mosaic_path) if f.type not in ['Geometry', 'OID'] and f.name not in ["Shape", "Raster", "Category", "TypeID", "ItemTS", "UriHash", "Uri",]]
+ fields = [f.name for f in arcpy.ListFields(mosaic_path)]
+
+ fields = ";".join(fields)
+
+ arcpy.management.SetMosaicDatasetProperties(in_mosaic_dataset = mosaic_path,
+ rows_maximum_imagesize = 4100,
+ columns_maximum_imagesize = 15000,
+ allowed_compressions = "LZ77;None",
+ default_compression_type = "LZ77",
+ JPEG_quality = 75,
+ LERC_Tolerance = 0.01,
+ resampling_type = "BILINEAR",
+ clip_to_footprints = "NOT_CLIP",
+ footprints_may_contain_nodata = "FOOTPRINTS_MAY_CONTAIN_NODATA",
+ clip_to_boundary = "CLIP",
+ color_correction = "NOT_APPLY",
+ allowed_mensuration_capabilities = "Basic",
+ default_mensuration_capabilities = "Basic",
+ allowed_mosaic_methods = "None",
+ default_mosaic_method = "None",
+ order_field = "StdTime",
+ order_base = "",
+ sorting_order = "ASCENDING",
+ mosaic_operator = "FIRST",
+ blend_width = 10,
+ view_point_x = 600,
+ view_point_y = 300,
+ max_num_per_mosaic = 50,
+ cell_size_tolerance = 0.8,
+ cell_size = f"{cell_size} {cell_size}",
+ metadata_level = "FULL",
+ transmission_fields = fields,
+ use_time = "ENABLED",
+ start_time_field = "StdTime",
+ end_time_field = "StdTime",
+ time_format = "YYYY", #YYYYMMDD
+ geographic_transform = None,
+ max_num_of_download_items = 20,
+ max_num_of_records_returned = 1000,
+ data_source_type = "GENERIC",
+ minimum_pixel_contribution = 1,
+ processing_templates = "None",
+ default_processing_template = "None",
+ time_interval = 1,
+ time_interval_units = "Years",
+ product_definition = "NONE",
+ product_band_definitions = None
+ )
+ arcpy.AddMessage("\tSet Mosaic Dataset Properties: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del fields
+
+ arcpy.AddMessage(f"Analyze Mosaic {os.path.basename(mosaic_path)} Dataset")
+
+ arcpy.management.AnalyzeMosaicDataset(
+ in_mosaic_dataset = mosaic_path,
+ where_clause = "",
+ checker_keywords = "FOOTPRINT;FUNCTION;RASTER;PATHS;SOURCE_VALIDITY;STALE;PYRAMIDS;STATISTICS;PERFORMANCE;INFORMATION"
+ )
+ arcpy.AddMessage("\tSet Mosaic Dataset Properties: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.AddMessage(f"Adding Multidimensional Information to {os.path.basename(mosaic_path)} Dataset")
+
+ with arcpy.EnvManager(scratchWorkspace = scratch_workspace, workspace = region_gdb):
+ arcpy.md.BuildMultidimensionalInfo(
+ in_mosaic_dataset = mosaic_path,
+ variable_field = "Variable",
+ dimension_fields = [["StdTime", "Time Step", "Year"],],
+ variable_desc_units = None,
+ delete_multidimensional_info = "NO_DELETE_MULTIDIMENSIONAL_INFO"
+ )
+ arcpy.AddMessage("\tBuild Multidimensional Info: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ #arcpy.management.CalculateStatistics(mosaic_path, 1, 1, [], "OVERWRITE", "")
+ #arcpy.AddMessage("\tCalculate Statistics: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ # Copy Raster to CRF
+ crf_path = rf"{scratch_folder}\{table_name}\{mosaic_name.replace('_Mosaic', '')}.crf"
+
+ arcpy.management.CopyRaster(
+ in_raster = mosaic_path,
+ out_rasterdataset = crf_path,
+ config_keyword = "",
+ background_value = None,
+ nodata_value = "-3.40282e+38",
+ onebit_to_eightbit = "NONE",
+ colormap_to_RGB = "NONE",
+ pixel_type = "32_BIT_FLOAT",
+ scale_pixel_value = "NONE",
+ RGB_to_Colormap = "NONE",
+ format = "CRF",
+ transform = None,
+ process_as_multidimensional = "ALL_SLICES",
+ build_multidimensional_transpose = "NO_TRANSPOSE"
+ )
+ arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.AddMessage(f"Calculate Statistics for {os.path.basename(crf_path)}")
+
+ arcpy.management.CalculateStatistics(crf_path, 1, 1, [], "OVERWRITE", "")
+ arcpy.AddMessage("\tCalculate Statistics: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del crf_path
+ del mosaic_path
+
+ # End of business logic for the worker function
+ arcpy.AddMessage(f"Processing for: {table_name} complete")
+
+ arcpy.management.Delete(rf"{region_gdb}\Datasets")
+ arcpy.management.Delete(rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName")
+ arcpy.management.Delete(rf"{region_gdb}\{table_name}_Raster_Mask")
+
+ # Declared Variables for this function only
+ del datasetcode, cell_size, mosaic_name, mosaic_title
+ # Basic variables
+ del table_name, scratch_folder, project_folder, scratch_workspace
+ # Imports
+ # Function parameter
+ del region_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def script_tool(project_gdb=""):
+ try:
+ # Imports
+ import dismap_tools
+ from create_mosaics_director import preprocessing
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ ## # Set worker parameters
+ ## #table_name = "AI_IDW"
+ ## table_name = "HI_IDW"
+ ## #table_name = "NBS_IDW"
+ ## #table_name = "ENBS_IDW"
+
+ table_names = ["HI_IDW", "NBS_IDW"]
+
+ preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
+
+ for table_name in table_names:
+ region_gdb = rf"{os.path.dirname(project_gdb)}\Scratch\{table_name}.gdb"
+ try:
+ worker(region_gdb=region_gdb)
+ except SystemExit:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ del table_name, region_gdb
+ del table_names
+
+ # Declared Variables
+ # Imports
+ del dismap_tools
+ # Function Parameters
+ del project_gdb
+
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(end_time-start_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
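The elapsed-time report above splits total seconds into H:M:S with nested `divmod`. A standalone sketch of the same formatting (the `hms` helper name is ours, not from the script):

```python
def hms(elapsed_seconds: float) -> str:
    # Split total seconds into hours, then the remainder into minutes and
    # seconds, matching the divmod pattern used in script_tool above.
    hours, rem = divmod(elapsed_seconds, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f}"

print(hms(3725.5))  # 01:02:05.50
```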
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+ script_tool(project_gdb)
+
+ arcpy.SetParameterAsText(1, "Result")
+
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_rasters_director.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_rasters_director.py
new file mode 100644
index 0000000..358c9a7
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_rasters_director.py
@@ -0,0 +1,418 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_rasters_director
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 09/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+
+import arcpy # third-parties second
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
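The `trace()` helper above pulls the failing line number out of the formatted traceback string. A minimal standalone version of the same parsing (the hard-coded `filename` default is illustrative; the script derives it from `__file__`):

```python
import sys
import traceback

def trace(filename="create_rasters_director.py"):
    # format_tb yields '  File "x.py", line N, in <scope>\n ...';
    # splitting on ", " isolates the 'line N' fragment.
    tb = sys.exc_info()[2]
    tbinfo = traceback.format_tb(tb)[0]
    line = tbinfo.split(", ")[1]
    synerror = traceback.format_exc().splitlines()[-1]
    return line, filename, synerror

try:
    1 / 0
except ZeroDivisionError:
    line, filename, err = trace()
    print(err)  # ZeroDivisionError: division by zero
```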
+
+def preprocessing(project_gdb="", table_names="", clear_folder=True):
+ try:
+ import dismap_tools
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+ scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+
+ # Clear Scratch Folder
+ #ClearScratchFolder = True
+ #if ClearScratchFolder:
+ if clear_folder:
+ dismap_tools.clear_folder(folder=scratch_folder)
+ else:
+ pass
+ #del ClearScratchFolder
+ del clear_folder
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ del project_folder, scratch_workspace
+
+ if not table_names:
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"),
+ "TableName",
+ where_clause = "TableName LIKE '%_IDW'")]
+ else:
+ pass
+
+ for table_name in table_names:
+ arcpy.AddMessage(f"Pre-Processing: {table_name}")
+
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ region_scratch_workspace = os.path.join(scratch_folder, f"{table_name}", "scratch.gdb")
+
+ # Create Scratch Workspace for Region
+ if not arcpy.Exists(region_scratch_workspace):
+ os.makedirs(os.path.join(scratch_folder, table_name))
+ if not arcpy.Exists(region_scratch_workspace):
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, f"{table_name}"), "scratch")
+ del region_scratch_workspace
+
+ sample_locations = rf"{table_name}_Sample_Locations"
+
+ arcpy.AddMessage(f"Creating File GDB: {table_name}")
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+
+ # Process: Make Table View (Make Table View) (management)
+ datasets = rf'{project_gdb}\Datasets'
+ arcpy.AddMessage(f"\t{os.path.basename(datasets)} has {arcpy.management.GetCount(datasets)[0]} records")
+
+ table_name_view = "Dataset Table View"
+ arcpy.management.MakeTableView(in_table = datasets,
+ out_view = table_name_view,
+ where_clause = f"TableName = '{table_name}'"
+ )
+ arcpy.AddMessage(f"\tThe table {table_name_view} has {arcpy.management.GetCount(table_name_view)[0]} records")
+ arcpy.management.CopyRows(table_name_view, rf"{region_gdb}\Datasets")
+ arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ filter_region = [row[0] for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", "FilterRegion")][0].replace("'", "''")
+ filter_subregion = [row[0] for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", "FilterSubRegion")][0].replace("'", "''")
+
+ arcpy.management.Delete(table_name_view)
+ del table_name_view
+
+ arcpy.AddMessage(f"Copying: The table {table_name}_LayerSpeciesYearImageName")
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_LayerSpeciesYearImageName", rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ #
+ arcpy.AddMessage("Make Feature Layer (Make Feature Layer) (management)")
+ # Process: Make Feature Layer (Make Feature Layer) (management)
+ idw_lyr = arcpy.management.MakeFeatureLayer( in_features = rf"{project_gdb}\{sample_locations}",
+ out_layer = "IDW_Sample_Locations_Layer",
+ where_clause = "DistributionProjectName = 'NMFS/Rutgers IDW Interpolation'",
+ workspace = "",
+ field_info = "OBJECTID OBJECTID VISIBLE NONE;Shape Shape VISIBLE NONE;DatasetCode DatasetCode VISIBLE NONE;Region Region VISIBLE NONE;Season Season VISIBLE NONE;DistributionProjectName DistributionProjectName VISIBLE NONE;SummaryProduct SummaryProduct VISIBLE NONE;SampleID SampleID VISIBLE NONE;Year Year VISIBLE NONE;StdTime StdTime VISIBLE NONE;Species Species VISIBLE NONE;WTCPUE WTCPUE VISIBLE NONE;MapValue MapValue VISIBLE NONE;TransformUnit TransformUnit VISIBLE NONE;CommonName CommonName VISIBLE NONE;SpeciesCommonName SpeciesCommonName VISIBLE NONE;CommonNameSpecies CommonNameSpecies VISIBLE NONE;CoreSpecies CoreSpecies VISIBLE NONE;Stratum Stratum VISIBLE NONE;StratumArea StratumArea VISIBLE NONE;Latitude Latitude VISIBLE NONE;Longitude Longitude VISIBLE NONE;Depth Depth VISIBLE NONE"
+ )
+
+ arcpy.AddMessage("Copy Features (Copy Features) (management)")
+ arcpy.management.CopyFeatures(idw_lyr, rf"{region_gdb}\{sample_locations}")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ #
+ arcpy.AddMessage("Copy")
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_Raster_Mask", rf"{region_gdb}\{table_name}_Raster_Mask")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.management.Delete(idw_lyr)
+ del idw_lyr
+
+ del sample_locations
+ del region_gdb, table_name
+ del datasets, filter_region, filter_subregion
+ # Declared Variables
+ del scratch_folder
+ # Imports
+ del dismap_tools
+ # Function Parameters
+ del project_gdb, table_names
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def director(project_gdb="", Sequential=True, table_names=[]):
+ try:
+ from create_rasters_worker import worker
+
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(rf"{project_gdb}"):
+ arcpy.AddError(f"{os.path.basename(project_gdb)} is missing!!")
+ arcpy.AddError(arcpy.GetMessages(2))
+ sys.exit()
+ #sys.exit()
+ else:
+ pass
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ del project_folder
+
+ preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
+
+ # Sequential Processing
+ if Sequential:
+ arcpy.AddMessage("Sequential Processing")
+ for i in range(0, len(table_names)):
+ arcpy.AddMessage(f"Processing: {table_names[i]}")
+ table_name = table_names[i]
+ region_gdb = rf"{os.path.dirname(project_gdb)}\Scratch\{table_name}.gdb"
+ try:
+ worker(region_gdb=region_gdb)
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ del region_gdb, table_name
+ del i
+ else:
+ pass
+
+ # Non-Sequential Processing
+ if not Sequential:
+ import multiprocessing
+ from time import time, localtime, strftime, sleep, gmtime
+ arcpy.AddMessage("Start multiprocessing using the ArcGIS Pro pythonw.exe.")
+ #Set multiprocessing exe in case we're running as an embedded process, i.e ArcGIS
+ #get_install_path() uses a registry query to figure out 64bit python exe if available
+ multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
+ # Get the CPU count, then reserve two cores for other processes
+ _processes = multiprocessing.cpu_count() - 2
+ _processes = min(_processes, len(table_names))
+ arcpy.AddMessage(f"Creating the multiprocessing Pool with {_processes} processes")
+ # Create a pool of workers, leaving CPU headroom for the rest of the system.
+ # Let each worker process handle only one task before being restarted (in case of nasty memory leaks)
+ with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
+ arcpy.AddMessage("\tPrepare arguments for processing")
+ # Use apply_async so we can handle exceptions gracefully
+ jobs={}
+ for i in range(0, len(table_names)):
+ try:
+ arcpy.AddMessage(f"Processing: {table_names[i]}")
+ table_name = table_names[i]
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ jobs[table_name] = pool.apply_async(worker, [region_gdb])
+ del table_name, region_gdb
+ except: # noqa: E722
+ pool.terminate()
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ del i
+ all_finished = False
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ result_completed = {}
+ while True:
+ all_finished = True
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ arcpy.AddMessage(f"\nStart Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage("Have the workers finished?")
+ finish_time = strftime('%a %b %d %I:%M %p', localtime())
+ time_elapsed = u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
+ arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
+ finish_time = f"{finish_time}.\n\t{time_elapsed}"
+ del time_elapsed
+ for table_name, result in jobs.items():
+ if result.ready():
+ if table_name not in result_completed:
+ result_completed[table_name] = finish_time
+ try:
+ # wait for and get the result from the task
+ result.get()
+ except: # noqa: E722
+ pool.terminate()
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ pass
+ arcpy.AddMessage(f"Process {table_name}\n\tFinished on {result_completed[table_name]}")
+ else:
+ all_finished = False
+ arcpy.AddMessage(f"Process {table_name} is running. . .")
+ del table_name, result
+ del elapse_time, end_time, finish_time
+ if all_finished:
+ break
+ sleep(_processes * 7.5)
+ del result_completed
+ del start_time
+ del all_finished
+ arcpy.AddMessage("\tClose the process pool")
+ # close the process pool
+ pool.close()
+ # wait for all tasks to complete and processes to close
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
+ pool.join()
+ # Just in case
+ pool.terminate()
+ del pool
+ del jobs
+ del _processes
+ del time, multiprocessing, localtime, strftime, sleep, gmtime
+ arcpy.AddMessage("\tDone with multiprocessing Pool")
+
+ # No Post-Processing
+
+ arcpy.AddMessage(f"Compacting the {os.path.basename(project_gdb)} GDB")
+ arcpy.management.Compact(project_gdb)
+ arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+
+ # Declared Variables assigned in function
+ del scratch_folder
+
+ # Imports
+ del worker
+
+ # Function Parameters
+ del project_gdb, Sequential, table_names
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def script_tool(project_gdb=""):
+ try:
+ # Imports
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ try:
+ # table_names = ["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",]
+
+ Test = False
+ if Test:
+ director(project_gdb=project_gdb, Sequential=True, table_names=["HI_IDW", "AI_IDW",])
+ elif not Test:
+ director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
+ else:
+ pass
+ del Test
+
+ except: # noqa: E722
+ pass
+ #arcpy.AddError(arcpy.GetMessages(2))
+ #traceback.print_exc()
+ #sys.exit()
+
+ # Declared Variables
+ # Imports
+
+ # Function Parameters
+ del project_gdb
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(end_time-start_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+ script_tool(project_gdb)
+ arcpy.SetParameterAsText(1, "Result")
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_rasters_worker.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_rasters_worker.py
new file mode 100644
index 0000000..698d66e
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_rasters_worker.py
@@ -0,0 +1,489 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_rasters_worker
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 09/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+
+import arcpy # third-parties second
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ #filename = sys.path[0] + os.sep + f"{os.path.basename(__file__)}"
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def print_table(table=""):
+ """ Print first 5 rows of a table """
+ try:
+ desc = arcpy.da.Describe(table)
+ fields = [f.name for f in desc["fields"] if f.type == "String"]
+ #Get OID field
+ oid = desc["OIDFieldName"]
+ # Limit output to the first five rows via an OID where clause
+ arcpy.AddMessage(f"{', '.join(fields)}")
+ for row in arcpy.da.SearchCursor(table, fields, f"{oid} <= 5"):
+ arcpy.AddMessage(row)
+ del row
+ del desc, fields, oid
+ del table
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def worker(region_gdb=""):
+ try:
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(rf"{region_gdb}"):
+ sys.exit()(f"{os.path.basename(region_gdb)} is missing!!")
+
+ # Import the dismap module to access tools
+ #import dev_dismap_tools
+ #importlib.reload(dev_dismap_tools)
+
+ # Import the worker module to process data
+ # N/A
+
+ # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic workspace variables
+ table_name = os.path.basename(region_gdb).replace(".gdb","")
+ scratch_folder = os.path.dirname(region_gdb)
+ project_folder = os.path.dirname(scratch_folder)
+ scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+
+ arcpy.AddMessage(f"Table Name: {table_name}\nProject Folder: {os.path.basename(project_folder)}\nScratch Folder: {os.path.basename(scratch_folder)}\n")
+
+ # Set basic workspace variables
+ arcpy.env.workspace = region_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ #arcpy.env.compression = "LZ77"
+ #arcpy.env.geographicTransformations = "WGS_1984_(ITRF08)_To_NAD_1983_2011"
+ arcpy.env.pyramid = "PYRAMIDS -1 BILINEAR LZ77 NO_SKIP"
+ arcpy.env.resamplingMethod = "BILINEAR"
+ arcpy.env.rasterStatistics = "STATISTICS 1 1"
+ #arcpy.env.buildStatsAndRATForTempRaster = True
+
+ # DatasetCode, CSVFile, TransformUnit, TableName, GeographicArea, CellSize,
+ # PointFeatureType, FeatureClassName, Region, Season, DateCode, Status,
+ # DistributionProjectCode, DistributionProjectName, SummaryProduct,
+ # FilterRegion, FilterSubRegion, FeatureServiceName, FeatureServiceTitle,
+ # MosaicName, MosaicTitle, ImageServiceName, ImageServiceTitle
+
+ # Get values for table_name from Datasets table
+ fields = ["TableName", "GeographicArea", "DatasetCode", "CellSize", "Region", "Season", "DistributionProjectCode", "SummaryProduct"]
+ region_list = [row for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", fields, where_clause = f"TableName = '{table_name}'")][0]
+ del fields
+
+ # Assigning variables from items in the chosen table list
+ # ['AI_IDW', 'AI_IDW_Region', 'AI', 'Aleutian Islands', None, 'IDW']
+ table_name = region_list[0]
+ #geographic_area = region_list[1]
+ datasetcode = region_list[2]
+ cell_size = int(region_list[3]) if isinstance(region_list[3], str) else region_list[3]
+ region = region_list[4]
+ season = region_list[5]
+ distri_code = region_list[6]
+ summary_product = region_list[7]
+ del region_list
+
+ #print(cell_size)
+ #print(type(cell_size))
+ #sys.exit()
+ arcpy.env.cellSize = cell_size
+
+ # Start of business logic for the worker function
+ arcpy.AddMessage(f"Processing: {table_name}")
+
+ # Business logic for the worker function
+
+ #geographic_area_sr = rf"{project_folder}\Dataset_Shapefiles\{table_name}\{geographic_area}.prj"
+ # Set the output coordinate system to what is needed for the
+ # DisMAP project
+ #psr = arcpy.SpatialReference(geographic_area_sr)
+ #arcpy.env.outputCoordinateSystem = psr
+ #del geographic_area_sr, geographic_area, psr
+
+ region_raster_mask = rf"{region_gdb}\{table_name}_Raster_Mask"
+ layerspeciesyearimagename = rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName"
+
+ arcpy.env.outputCoordinateSystem = arcpy.Describe(region_raster_mask).spatialReference
+
+ output_rasters = {}
+
+ # ##---> This block creates the folder structure in the Images folder
+ #fields = "DatasetCode;Region;Season;Species;CommonName;SpeciesCommonName;CoreSpecies;Year;StdTime;Variable;Value;Dimensions"
+ fields = ['ImageName', 'Variable', 'Species', 'Year']
+
+ #with arcpy.da.SearchCursor(layerspeciesyearimagename, fields, where_clause = f"TableName = '{table_name}'") as cursor:
+ with arcpy.da.SearchCursor(layerspeciesyearimagename, fields) as cursor:
+ for row in cursor:
+ image_name, variable, species, year = row[0], row[1], row[2], row[3]
+ #if variable not in variables: variables.append(variable)
+ #arcpy.AddMessage(f"{variable}, {image_name}, {species}, {year}")
+ #variable = f"_{variable}" if "Species Richness" in variable else variable
+ if "Species Richness" not in variable:
+ output_raster_path = rf"{project_folder}\Images\{table_name}\{variable}\{image_name}.tif"
+ #arcpy.AddMessage(output_raster_path)
+ output_rasters[image_name] = [image_name, variable, species, year, output_raster_path]
+ image_folder = os.path.dirname(output_raster_path)
+ #arcpy.AddMessage(image_folder)
+ if not os.path.exists(image_folder):
+ os.makedirs(image_folder)
+ #arcpy.AddMessage(output_raster_path)
+ #arcpy.AddMessage(os.path.dirname(output_raster_path))
+ #arcpy.AddMessage(image_folder)
+ del image_folder
+ del output_raster_path
+
+ del row, image_name, variable, species, year
+ del cursor
+
+ del fields
+
+ del layerspeciesyearimagename
+
+ arcpy.AddMessage(f'Generating {table_name} Biomass Rasters')
+
+ sample_locations = rf"{table_name}_Sample_Locations"
+
+ arcpy.AddMessage(f"\tMake Feature Layer for {sample_locations}")
+
+ # Prepare the points layer
+ sample_locations_path = rf"{region_gdb}\{sample_locations}"
+ sample_locations_path_layer = arcpy.management.MakeFeatureLayer(sample_locations_path, "Region Sample Locations Layer")
+ del sample_locations_path
+
+ if summary_product == "Yes":
+ # Add the YearWeights field
+ fields = [f.name for f in arcpy.ListFields(sample_locations_path_layer) if f.type not in ['Geometry', 'OID']]
+ if "YearWeights" not in fields:
+ # Add the YearWeights field to the Dataset. This is used for the IDW modeling later
+ arcpy.management.AddField(sample_locations_path_layer, "YearWeights", "SHORT", field_alias = "Year Weights")
+ del fields
+
+ getcount = arcpy.management.GetCount(sample_locations_path_layer)[0]
+ arcpy.AddMessage(f'\t{sample_locations} has {getcount} records')
+ del getcount
+
+ for output_raster in output_rasters:
+ image_name, variable, species, year, output_raster_path = output_rasters[output_raster]
+
+ #if not arcpy.Exists(output_raster_path):
+
+ msg = f"\n\t\tImage Name: {output_raster}\n"
+ msg = msg + f"\t\t\tVariable: {variable}\n"
+ msg = msg + f"\t\t\tSpecies: {species}\n"
+ msg = msg + f"\t\t\tYear: {year}\n"
+ msg = msg + f"\t\t\tOutput Raster: {os.path.basename(output_raster_path)}\n"
+ arcpy.AddMessage(msg)
+ del msg
+
+ arcpy.AddMessage('\t\t\tSelect Layer by Attribute: "CLEAR_SELECTION"')
+
+ arcpy.management.SelectLayerByAttribute( sample_locations_path_layer, "CLEAR_SELECTION" )
+
+ arcpy.AddMessage(f"\t\t\tSelect Layer by Attribute: Species = '{species}' AND Year = {year}")
+
+ # Select for species and year
+ #print(sample_locations_path_layer)
+ #print(f"Species = '{species}' AND Year = {year}")
+ arcpy.management.SelectLayerByAttribute( in_layer_or_view = sample_locations_path_layer,
+ selection_type = "NEW_SELECTION",
+ where_clause = f"Species = '{species}' And Year = {year}",
+ invert_where_clause=None
+ )
+
+ # Get the count of records for selected species
+ getcount = arcpy.management.GetCount(sample_locations_path_layer)[0]
+ arcpy.AddMessage(f"\t\t\t{sample_locations} has {getcount} records for {species} and year {year}")
+ del getcount
+
+ arcpy.AddMessage(f"\t\t\tCreating Raster File {output_raster}.tif for {species} and {year}")
+
+ #if summary_product == "Yes":
+
+ arcpy.AddMessage("\t\t\tProcessing IDW")
+
+ # Select weighted years
+ arcpy.management.SelectLayerByAttribute( sample_locations_path_layer,
+ "NEW_SELECTION",
+ f"Species = '{species}' AND Year >= ({year-2}) AND Year <= ({year+2})"
+ )
+
+ # Get the count of records for selected species
+ getcount = arcpy.management.GetCount(sample_locations_path_layer)[0]
+
+ arcpy.AddMessage(f"\t\t\t\t{sample_locations_path_layer} has {getcount} records for {species} and from years {year-2} to {year+2}")
+ del getcount
+
+ # Calculate YearWeights=3-(abs(Tc-Ti))
+ arcpy.management.CalculateField(in_table=sample_locations_path_layer, field="YearWeights", expression=f"3 - (abs({int(year)} - !Year!))", expression_type="PYTHON", code_block="")
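The CalculateField expression above assigns each sample a triangular weight by its distance in years from the target year Tc. A sketch of the same arithmetic in plain Python (the values are illustrative, not from the dataset):

```python
# Triangular year weighting used above: YearWeights = 3 - (abs(Tc - Ti)),
# where Tc is the target year and Ti is the sample's year. Samples from
# the target year get weight 3; samples two years out get weight 1.
def year_weight(target_year, sample_year):
    return 3 - abs(target_year - sample_year)

weights = {y: year_weight(2020, y) for y in range(2018, 2023)}
# {2018: 1, 2019: 2, 2020: 3, 2021: 2, 2022: 1}
```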
+
+ # Set the environment mask, extent, and snap raster; otherwise the output raster may not align correctly.
+ arcpy.env.extent = arcpy.Describe(region_raster_mask).extent
+ arcpy.env.mask = region_raster_mask
+ arcpy.env.snapRaster = region_raster_mask
+
+ # Set variables for search neighborhood
+ majSemiaxis = int(cell_size) * 1000
+ minSemiaxis = int(cell_size) * 1000
+ angle = 0
+ maxNeighbors = 15
+ minNeighbors = 10
+ sectorType = "ONE_SECTOR"
+ searchNeighbourhood = arcpy.SearchNeighborhoodStandard(majSemiaxis, minSemiaxis, angle, maxNeighbors, minNeighbors, sectorType)
+ #print(majSemiaxis, minSemiaxis, angle, maxNeighbors, minNeighbors, sectorType)
+
+ del majSemiaxis, minSemiaxis, angle
+ del maxNeighbors, minNeighbors, sectorType
+
+ # Check out the ArcGIS Geostatistical Analyst extension license
+ arcpy.CheckOutExtension("GeoStats")
+
+ #tmp_raster = os.path.join(ScratchFolder, f"{output_raster}.tif")
+ tmp_raster = f"memory\\{output_raster}"
+
+ #print(sample_locations_path_layer)
+ #print(tmp_raster)
+ #print(cell_size)
+ #print(searchNeighbourhood)
+ #sys.exit()
+
+ # Execute IDW using the selected species, years, and MapValue
+ arcpy.ga.IDW(in_features = sample_locations_path_layer,
+ z_field = 'MapValue',
+ out_ga_layer = '',
+ out_raster = tmp_raster,
+ cell_size = cell_size,
+ power = 2,
+ search_neighborhood = searchNeighbourhood,
+ weight_field = "YearWeights")
+
+ del searchNeighbourhood
+
+ arcpy.ClearEnvironment("extent")
+ arcpy.ClearEnvironment("mask")
+ arcpy.ClearEnvironment("snapRaster")
+
+ # Check In GeoStats Extension
+ arcpy.CheckInExtension("GeoStats")
+
+ # Execute Power to convert the raster back to WTCPUE from WTCPUECubeRoot
+ out_cube = arcpy.sa.Power(tmp_raster, 3)
+ #out_cube.save(tmp_raster_power)
+ out_cube.save(output_raster_path)
+ del out_cube
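For context, the Power step inverts the cube-root transform applied to WTCPUE before interpolation; cubing the interpolated surface restores the original units. A minimal numeric sketch of the round trip (plain Python standing in for the per-cell raster math):

```python
# WTCPUE values are cube-root transformed upstream to damp extreme catches;
# arcpy.sa.Power(tmp_raster, 3) cubes each cell to undo that transform.
wtcpue = 123.4
cube_root = wtcpue ** (1.0 / 3.0)  # transform applied before IDW
restored = cube_root ** 3          # what Power(tmp_raster, 3) does per cell
assert abs(restored - wtcpue) < 1e-9
```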
+
+ if arcpy.Exists(tmp_raster):
+ arcpy.management.Delete(tmp_raster)
+ del tmp_raster
+
+ # Reset the YearWeights to None
+ arcpy.management.CalculateField(in_table=sample_locations_path_layer, field="YearWeights", expression="None", expression_type="PYTHON", code_block="")
+
+ # Clear selection
+ arcpy.management.SelectLayerByAttribute( sample_locations_path_layer, "CLEAR_SELECTION" )
+
+ from arcpy import metadata as md
+ tif_md = md.Metadata(output_raster_path)
+ tif_md.title = image_name.replace("_", " ")
+ tif_md.save()
+ tif_md.synchronize("ALWAYS")
+ tif_md.save()
+ del md, tif_md
+
+ arcpy.management.BuildPyramids(
+ in_raster_dataset = output_raster_path,
+ pyramid_level = -1,
+ SKIP_FIRST = "NONE",
+ resample_technique = "BILINEAR",
+ compression_type = "DEFAULT",
+ compression_quality = 75,
+ skip_existing = "OVERWRITE"
+ )
+
+ # Clean up
+ del image_name, variable, species, year, output_raster_path, output_raster
+
+ del sample_locations
+ # Delete sample_locations_path_layer
+ arcpy.management.Delete(sample_locations_path_layer)
+ del sample_locations_path_layer
+
+ # End of business logic for the worker function
+ arcpy.AddMessage(f"Processing for: {table_name} complete")
+
+ # Clean up
+ del output_rasters
+ del region_raster_mask
+
+ # Declared Variables for this function only
+ del region, season, distri_code, summary_product
+ del datasetcode, cell_size
+ # Basic variables
+ del table_name, scratch_folder, project_folder, scratch_workspace
+ # Imports
+ #del dismap
+ # Function parameter
+ del region_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def script_tool(project_gdb=""):
+ try:
+ # Imports
+ import dismap_tools
+ from create_rasters_director import preprocessing
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: ..{'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ # Clear Scratch Folder
+ ClearScratchFolder = False
+ if ClearScratchFolder:
+ dismap_tools.clear_folder(folder=rf"{os.path.dirname(project_gdb)}\Scratch")
+ else:
+ pass
+ del ClearScratchFolder
+
+ ## # Set worker parameters
+ ## #table_name = "AI_IDW"
+ ## table_name = "HI_IDW"
+ ## #table_name = "NBS_IDW"
+ ## #table_name = "ENBS_IDW"
+
+ table_names = ["HI_IDW"]
+
+ preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
+
+ for table_name in table_names:
+ region_gdb = rf"{os.path.dirname(project_gdb)}\Scratch\{table_name}.gdb"
+ try:
+ worker(region_gdb=region_gdb)
+ except SystemExit:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+
+ del table_name, region_gdb
+
+ del table_names
+
+ # Declared Variables
+ # Imports
+ del dismap_tools, preprocessing
+ # Function Parameters
+ del project_gdb
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(end_time-start_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
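The divmod chain above (repeated in the other scripts' elapsed-time blocks) converts elapsed seconds into H:M:S. The same arithmetic in isolation, with an illustrative input:

```python
# divmod splits elapsed seconds into hours, then splits the remainder
# into minutes and seconds, exactly as in the elapsed-time block above.
def hms(elapsed_seconds):
    hours, rem = divmod(elapsed_seconds, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f}"

print(hms(3725.5))  # 01:02:05.50
```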
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+ script_tool(project_gdb)
+ arcpy.SetParameterAsText(1, "Result")
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_bathymetry_director.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_bathymetry_director.py
new file mode 100644
index 0000000..0b5519d
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_bathymetry_director.py
@@ -0,0 +1,450 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_region_bathymetry_director
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 05/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys # built-ins first
+import traceback
+
+import arcpy # third-parties second
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = __file__  # report this script's path rather than a placeholder
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
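The trace() helper above parses the active traceback by splitting the formatted frame on ", " so that element [1] is the "line N" token, and takes the last line of format_exc() as the error summary. A self-contained sketch of that parsing, stdlib only:

```python
import sys
import traceback

def demo_trace():
    # format_tb yields strings like '  File "x.py", line 7, in f\n    ...',
    # so splitting on ", " makes element [1] the 'line N' token.
    tb = sys.exc_info()[2]
    tbinfo = traceback.format_tb(tb)[0]
    line = tbinfo.split(", ")[1]
    synerror = traceback.format_exc().splitlines()[-1]
    return line, synerror

try:
    1 / 0
except ZeroDivisionError:
    line, synerror = demo_trace()
# line starts with 'line'; synerror names the exception
```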
+
+def preprocessing(project_gdb="", table_names="", clear_folder=True):
+ try:
+ import dismap_tools
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+ csv_data_folder = os.path.join(project_folder, "CSV_Data")
+ base_project_bathymetry_gdb = os.path.join(os.path.dirname(project_folder), "Bathymetry\\Bathymetry.gdb")
+
+## # Clear Scratch Folder
+## #ClearScratchFolder = True
+## #if ClearScratchFolder:
+## if clear_folder:
+## dismap_tools.clear_folder(folder = scratch_folder)
+## else:
+## pass
+## #del ClearScratchFolder
+## del clear_folder
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ del project_folder, scratch_workspace
+
+ if not table_names:
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"), "TableName", where_clause = "TableName LIKE '%_IDW'")]
+ else:
+ pass
+
+ for table_name in table_names:
+ arcpy.AddMessage(f"Pre-Processing: {table_name}")
+
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ region_scratch_workspace = os.path.join(scratch_folder, f"{table_name}", "scratch.gdb")
+
+ # Create Scratch Workspace for Region
+ if not arcpy.Exists(region_scratch_workspace):
+ os.makedirs(os.path.join(scratch_folder, table_name), exist_ok=True)
+ if not arcpy.Exists(region_scratch_workspace):
+ arcpy.AddMessage(f"Create File GDB: '{table_name}'")
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, table_name), "scratch")
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del region_scratch_workspace
+ # # # CreateFileGDB
+ arcpy.AddMessage(f"Creating File GDB: '{table_name}'")
+ arcpy.management.CreateFileGDB(scratch_folder, table_name)
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # CreateFileGDB
+
+ # # # Datasets
+ # Process: Make Table View (Make Table View) (management)
+ datasets = rf'{project_gdb}\Datasets'
+ arcpy.AddMessage(f"'{os.path.basename(datasets)}' has {arcpy.management.GetCount(datasets)[0]} records")
+ arcpy.management.Copy(datasets, os.path.join(region_gdb, "Datasets"))
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del datasets
+ # # # Datasets
+
+ # # # Fishnet
+ region_fishnet = os.path.join(project_gdb, f"{table_name}_Fishnet")
+ arcpy.AddMessage(f"The table '{table_name}_Fishnet' has {arcpy.management.GetCount(region_fishnet)[0]} records")
+ arcpy.management.Copy(region_fishnet, os.path.join(region_gdb, f"{table_name}_Fishnet"))
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del region_fishnet
+ # # # Fishnet
+
+ # # # Raster_Mask
+ region_raster_mask = os.path.join(project_gdb, f"{table_name}_Raster_Mask")
+ arcpy.AddMessage(f"Copy Raster Mask for '{table_name}'")
+ #arcpy.management.Copy(os.path.join(project_gdb, f"{table_name}_Raster_Mask"), os.path.join(region_gdb, f"{table_name}_Raster_Mask"))
+ arcpy.management.CopyRaster(region_raster_mask, os.path.join(region_gdb, f"{table_name}_Raster_Mask"))
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del region_raster_mask
+ # # # Raster_Mask
+
+ # # # Bathymetry
+ base_fishnet_bathymetry = os.path.join(base_project_bathymetry_gdb, f"{table_name}_Bathymetry")
+ arcpy.AddMessage(f"Copy Bathymetry for '{table_name}'")
+ #arcpy.management.Copy(os.path.join(project_bathymetry_gdb, f"{table_name}_Bathymetry"), os.path.join(region_gdb, f"{table_name}_Fishnet_Bathymetry"))
+ arcpy.management.CopyRaster(base_fishnet_bathymetry, os.path.join(region_gdb, f"{table_name}_Fishnet_Bathymetry"))
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del base_fishnet_bathymetry
+ # # # Bathymetry
+
+ # Declared Variables
+ del table_name
+
+ # Declared Variables
+ del scratch_folder, region_gdb
+ del csv_data_folder, base_project_bathymetry_gdb
+ # Imports
+ del dismap_tools
+ # Function Parameters
+ del project_gdb, table_names
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def director(project_gdb="", Sequential=True, table_names=[]):
+ try:
+ # Imports
+ import dismap_tools
+ from create_region_bathymetry_worker import worker
+
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(project_gdb):
+ arcpy.AddError(f"{os.path.basename(project_gdb)} is missing!!")
+ arcpy.AddError(arcpy.GetMessages(2))
+ sys.exit()
+ else:
+ pass
+
+ # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{os.path.dirname(project_gdb)}\Scratch"
+ scratch_workspace = rf"{os.path.dirname(project_gdb)}\Scratch\scratch.gdb"
+ csv_data_folder = rf"{os.path.dirname(project_gdb)}\CSV_Data"
+ #project_bathymetry_gdb = rf"{os.path.dirname(project_gdb)}\Bathymetry\Bathymetry.gdb"
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+
+ preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
+
+ del project_folder, scratch_workspace
+
+ # Sequential Processing
+ if Sequential:
+ arcpy.AddMessage("Sequential Processing")
+ for i in range(0, len(table_names)):
+ arcpy.AddMessage(f"Processing: {table_names[i]}")
+ table_name = table_names[i]
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ try:
+ pass
+ worker(region_gdb=region_gdb)
+ except: # noqa: E722
+ traceback.print_exc()
+ del region_gdb, table_name
+ del i
+ else:
+ pass
+
+ # Non-Sequential Processing
+ if not Sequential:
+ import multiprocessing
+ from time import time, localtime, strftime, sleep, gmtime
+ arcpy.AddMessage("Non-Sequential Processing")
+ #Set multiprocessing exe in case we're running as an embedded process, i.e. ArcGIS
+ #get_install_path() uses a registry query to figure out 64bit python exe if available
+ multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
+ # Get CPU count and then take 2 away for other process
+ _processes = multiprocessing.cpu_count() - 2
+ _processes = _processes if len(table_names) >= _processes else len(table_names)
+ arcpy.AddMessage(f"Creating the multiprocessing Pool with {_processes} processes")
+ #Create a pool of workers, keeping two CPUs free for other processes.
+ #Let each worker process only handle 1 task before being restarted (in case of nasty memory leaks)
+ with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
+ arcpy.AddMessage("\tPrepare arguments for processing")
+ # Use apply_async so we can handle exceptions gracefully
+ jobs={}
+ for i in range(0, len(table_names)):
+ try:
+ arcpy.AddMessage(f"Processing: {table_names[i]}")
+ table_name = table_names[i]
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ jobs[table_name] = pool.apply_async(worker, [region_gdb])
+ del table_name, region_gdb
+ except: # noqa: E722
+ pool.terminate()
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ del i
+ all_finished = False
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ result_completed = {}
+ while True:
+ all_finished = True
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ arcpy.AddMessage(f"\nStart Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage("Have the workers finished?")
+ finish_time = strftime('%a %b %d %I:%M %p', localtime())
+ time_elapsed = u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
+ arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
+ finish_time = f"{finish_time}.\n\t{time_elapsed}"
+ del time_elapsed
+ for table_name, result in jobs.items():
+ if result.ready():
+ if table_name not in result_completed:
+ result_completed[table_name] = finish_time
+ try:
+ # wait for and get the result from the task
+ result.get()
+ except SystemExit:
+ pool.terminate()
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ pool.terminate()
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ pass
+ arcpy.AddMessage(f"Process {table_name}\n\tFinished on {result_completed[table_name]}")
+ else:
+ all_finished = False
+ arcpy.AddMessage(f"Process {table_name} is running. . .")
+ del table_name, result
+ del elapse_time, end_time, finish_time
+ if all_finished:
+ break
+ sleep(_processes * 7.5)
+ del result_completed
+ del start_time
+ del all_finished
+ arcpy.AddMessage("\tClose the process pool")
+ # close the process pool
+ pool.close()
+ # wait for all tasks to complete and processes to close
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
+ pool.join()
+ # Just in case
+ pool.terminate()
+ del pool
+ del jobs
+ del _processes
+ del time, multiprocessing, localtime, strftime, sleep, gmtime
+
+ arcpy.AddMessage("\tDone with multiprocessing Pool")
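The pool logic above follows an apply_async + ready()/get() polling pattern. A minimal generic version of that pattern, shown with multiprocessing.dummy (thread-backed, same Pool API) so it runs anywhere; the real script uses process workers, sleeps between polls, and reports per-table progress:

```python
# Generic sketch of the apply_async + polling pattern used above
# (multiprocessing.dummy shares the Pool API but uses threads, so no
# spawn/pickling concerns in this illustration).
from multiprocessing.dummy import Pool

def square(x):
    return x * x

with Pool(processes=2) as pool:
    jobs = {n: pool.apply_async(square, [n]) for n in range(4)}
    while not all(job.ready() for job in jobs.values()):
        pass  # the script sleeps here and logs which jobs are still running
    results = {n: job.get() for n, job in jobs.items()}  # get() re-raises worker errors
# results == {0: 0, 1: 1, 2: 4, 3: 9}
```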
+
+ # Post-Processing
+ arcpy.AddMessage("Post-Processing Begins")
+ arcpy.AddMessage("Processing Results")
+
+ datasets = list()
+
+ walk = arcpy.da.Walk(scratch_folder, datatype="RasterDataset", type=[])
+ for dirpath, dirnames, filenames in walk:
+ for filename in filenames:
+ datasets.append(os.path.join(dirpath, filename))
+ del filename
+ del dirpath, dirnames, filenames
+ del walk
+
+ for dataset in datasets:
+ dataset_short_path = f"..{'/'.join(dataset.split(os.sep)[-4:])}"
+ #arcpy.AddMessage(dataset_short_path)
+ dataset_name = os.path.basename(dataset)
+ region_gdb = os.path.dirname(dataset)
+ arcpy.AddMessage(f"\tDataset: '{dataset_name}'")
+ arcpy.AddMessage(f"\t\tPath: '{dataset_short_path}'")
+ arcpy.AddMessage(f"\t\tRegion GDB: '{os.path.basename(region_gdb)}'")
+
+## if arcpy.Exists(rf"{project_gdb}\{dataset_name}"):
+## arcpy.management.Delete(rf"{project_gdb}\{dataset_name}")
+## else:
+## pass
+
+ arcpy.management.CopyRaster(dataset, rf"{project_gdb}\{dataset_name}")
+ arcpy.AddMessage("\tCopy: {0} {1}\n".format(f"{dataset_name}", arcpy.GetMessages(0).replace("\n", '\n\t')))
+
+ desc = arcpy.da.Describe(dataset)
+ if desc["dataType"] in ["FeatureClass", "Table", "MosaicDataset"]:
+ dismap_tools.alter_fields(csv_data_folder, rf"{project_gdb}\{dataset_name}")
+ del desc
+
+ del region_gdb, dataset_name, dataset_short_path, dataset
+
+ del datasets
+
+ arcpy.AddMessage(f"Compacting the {os.path.basename(project_gdb)} GDB")
+ arcpy.management.Compact(project_gdb)
+ arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+
+ # Declared Variables
+ del csv_data_folder, scratch_folder
+ # Imports
+ del dismap_tools, worker
+ # Function Parameters
+ del project_gdb, Sequential, table_names
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def script_tool(project_gdb=""):
+ try:
+ # Imports
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: ..{'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ try:
+ # "AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW",
+ # "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",
+
+ Test = False
+ if Test:
+ director(project_gdb=project_gdb, Sequential=True, table_names=["HI_IDW"])
+ else:
+ director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
+ del Test
+
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+
+ # Declared Variables
+ # Imports
+
+ # Function Parameters
+ del project_gdb
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(end_time-start_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+ script_tool(project_gdb)
+ arcpy.SetParameterAsText(1, "Result")
+ del project_gdb
+
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_bathymetry_worker.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_bathymetry_worker.py
new file mode 100644
index 0000000..a7b5ffc
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_bathymetry_worker.py
@@ -0,0 +1,288 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_region_bathymetry_worker
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 05/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+
+import arcpy # third-parties second
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = __file__  # report this script's path rather than a placeholder
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def worker(region_gdb=""):
+ try:
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(rf"{region_gdb}"):
+ sys.exit(f"{os.path.basename(region_gdb)} is missing!!")
+
+ # Imports
+ from arcpy import metadata as md
+ import dismap_tools
+
+ # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic workspace variables
+ table_name = os.path.basename(region_gdb).replace(".gdb","")
+ scratch_folder = os.path.dirname(region_gdb)
+ project_folder = os.path.dirname(scratch_folder)
+ csv_data_folder = rf"{project_folder}\CSV_Data"
+ scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+
+ arcpy.AddMessage(f"Table Name: {table_name}\nProject Folder: {os.path.basename(project_folder)}\nScratch Folder: {os.path.basename(scratch_folder)}\n")
+
+ # Set basic workspace variables
+ arcpy.env.workspace = region_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.env.compression = "LZ77"
+ #arcpy.env.geographicTransformations = "WGS_1984_(ITRF08)_To_NAD_1983_2011"
+ arcpy.env.pyramid = "PYRAMIDS -1 BILINEAR LZ77 NO_SKIP"
+ arcpy.env.resamplingMethod = "BILINEAR"
+ arcpy.env.rasterStatistics = "STATISTICS 1 1"
+ #arcpy.env.XYResolution = "0.1 Meters"
+ #arcpy.env.XYResolution = "0.01 Meters"
+ #arcpy.env.cellAlignment = "ALIGN_WITH_PROCESSING_EXTENT" # Set the cell alignment environment using a keyword.
+
+ # DatasetCode, CSVFile, TransformUnit, TableName, GeographicArea, CellSize,
+ # PointFeatureType, FeatureClassName, Region, Season, DateCode, Status,
+ # DistributionProjectCode, DistributionProjectName, SummaryProduct,
+ # FilterRegion, FilterSubRegion, FeatureServiceName, FeatureServiceTitle,
+ # MosaicName, MosaicTitle, ImageServiceName, ImageServiceTitle
+
+ # Get values for table_name from Datasets table
+ #fields = ["TableName", "GeographicArea", "DatasetCode", "CellSize", "MosaicName", "MosaicTitle"]
+ #region_list = [row for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", fields, where_clause = f"TableName = '{table_name}'")][0]
+ #del fields
+
+ # Assigning variables from items in the chosen table list
+ # ['AI_IDW', 'AI_IDW_Region', 'AI', 'Aleutian Islands', None, 'IDW']
+ #table_name = region_list[0]
+ #geographic_area = region_list[1]
+ #datasetcode = region_list[2]
+ #cell_size = region_list[3]
+ #mosaic_name = region_list[4]
+ #mosaic_title = region_list[5]
+ #del region_list
+
+ # Start of business logic for the worker function
+ arcpy.AddMessage(f"Processing: {table_name}")
+
+ # Input
+ region_fishnet = os.path.join(region_gdb, f"{table_name}_Fishnet")
+ region_raster_mask = os.path.join(region_gdb, f"{table_name}_Raster_Mask")
+ region_fishnet_bathymetry = os.path.join(region_gdb, f"{table_name}_Fishnet_Bathymetry")
+ # Output
+ region_bathymetry = os.path.join(region_gdb, f"{table_name}_Bathymetry")
+
+ # Get the reference system defined for the region in datasets
+ # Set the output coordinate system to what is needed for the
+ # DisMAP project
+ region_prj = arcpy.Describe(region_raster_mask).spatialReference
+ #arcpy.AddMessage(f"region_prj: {region_prj}")
+ if region_prj.linearUnitName == "Kilometer":
+ arcpy.env.cellSize = 1
+ arcpy.env.XYResolution = 1.0
+ elif region_prj.linearUnitName == "Meter":
+ arcpy.env.cellSize = 1000
+ arcpy.env.XYResolution = 0.001
+
+ # Process: Point to Raster Mask
+ arcpy.env.outputCoordinateSystem = region_prj
+ arcpy.env.cellSize = int(arcpy.Describe(f"{region_raster_mask}/Band_1").meanCellWidth)
+ arcpy.env.extent = arcpy.Describe(region_raster_mask).extent
+ arcpy.env.mask = region_raster_mask
+ arcpy.env.snapRaster = region_raster_mask
+
+ del region_prj
+
+ arcpy.AddMessage(f"\tCalculating Zonal Statistics using {os.path.basename(region_fishnet)} and {os.path.basename(region_fishnet_bathymetry)} to create {os.path.basename(region_bathymetry)}")
+ # Execute ZonalStatistics
+ #out_raster = arcpy.sa.ZonalStatistics(region_fishnet, "OID", region_fishnet_bathymetry, "MEDIAN", "NODATA")
+ #out_raster = arcpy.sa.ZonalStatistics(region_fishnet, "OID", region_fishnet_bathymetry, "MEDIAN", "DATA")
+
+ with arcpy.EnvManager(scratchWorkspace = arcpy.env.scratchWorkspace):
+ #print(region_fishnet)
+ #print(region_fishnet_bathymetry)
+ out_raster = arcpy.sa.ZonalStatistics(
+ in_zone_data = region_fishnet,
+ zone_field = "OID",
+ in_value_raster = region_fishnet_bathymetry,
+ statistics_type = "MEDIAN",
+ ignore_nodata = "DATA",
+ process_as_multidimensional = "CURRENT_SLICE",
+ percentile_value = 90,
+ percentile_interpolation_type = "AUTO_DETECT",
+ circular_calculation = "ARITHMETIC",
+ circular_wrap_value = 360
+ )
+
+ arcpy.AddMessage("\tZonal Statistics: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # Save the output
+ out_raster.save(region_bathymetry)
+ arcpy.AddMessage("\tSave: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del out_raster
+
+ dismap_tools.import_metadata(csv_data_folder, region_bathymetry)
+
+ del region_bathymetry
+
+ # Remove intermediate datasets, reporting each deletion
+ for intermediate in (os.path.join(region_gdb, "Datasets"), region_raster_mask, region_fishnet, region_fishnet_bathymetry):
+ arcpy.management.Delete(intermediate)
+ arcpy.AddMessage("\tDelete: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del intermediate
+ del region_raster_mask, region_fishnet, region_fishnet_bathymetry
+
+ arcpy.management.Compact(region_gdb)
+
+ # Declared Variables for this function only
+ del scratch_folder, scratch_workspace
+ del table_name, project_folder, csv_data_folder
+ # Imports
+ del md, dismap_tools
+ # Function parameter
+ del region_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def script_tool(project_gdb=""):
+ try:
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ ## # Set worker parameters
+ ## #table_name = "AI_IDW"
+ ## table_name = "HI_IDW"
+ ## #table_name = "NBS_IDW"
+ ## #table_name = "ENBS_IDW"
+
+ table_names = ["NBS_IDW",]
+
+ from create_region_bathymetry_director import preprocessing
+
+ preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
+
+ del preprocessing
+
+ for table_name in table_names:
+ region_gdb = rf"{os.path.dirname(project_gdb)}\Scratch\{table_name}.gdb"
+ try:
+ worker(region_gdb=region_gdb)
+ except SystemExit:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ del table_name, region_gdb
+ del table_names
+
+ # Declared Variables
+ # Imports
+ # Function Parameters
+ del project_gdb
+
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(end_time-start_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
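The elapsed-time report in `script_tool` splits a duration into hours, minutes, and seconds with two `divmod` calls. A minimal stand-alone sketch of that pattern (stdlib only; `format_elapsed` is an illustrative name, not a function from the script):

```python
def format_elapsed(start_time: float, end_time: float) -> str:
    """Format a duration as H:M:S, mirroring the divmod pattern in script_tool."""
    hours, rem = divmod(end_time - start_time, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f}"

print(format_elapsed(0.0, 3725.5))  # → "01:02:05.50"
```

The second `divmod` works on the remainder of the first, so the three fields never overlap.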
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+ script_tool(project_gdb)
+ arcpy.SetParameterAsText(1, "Result")
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_fishnets_director.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_fishnets_director.py
new file mode 100644
index 0000000..cbca10b
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_fishnets_director.py
@@ -0,0 +1,396 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_region_fishnets_director.py
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 25/02/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+import inspect
+
+import arcpy # third-parties second
+
+def director(project_gdb="", Sequential=True, table_names=[]):
+ try:
+ # Imports
+ import dismap_tools
+ from create_region_fishnets_worker import worker
+
+ # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic workspace variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+ csv_data_folder = rf"{project_folder}\CSV_Data"
+
+ # Clear Scratch Folder
+ dismap_tools.clear_folder(folder=scratch_folder)
+
+ # Create Scratch Workspace for Project
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ if not arcpy.Exists(scratch_folder):
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", "scratch")
+
+ # Set basic workspace variables
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ del project_folder
+
+ if not table_names:
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"),"TableName", where_clause = "TableName LIKE '%_IDW'")]
+ else:
+ pass
+
+ # Pre Processing
+ for table_name in table_names:
+ arcpy.AddMessage(f"Pre-Processing: {table_name}")
+
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ region_scratch_workspace = os.path.join(scratch_folder, f"{table_name}", "scratch.gdb")
+
+ # Create Scratch Workspace for Region
+ if not arcpy.Exists(region_scratch_workspace):
+ os.makedirs(os.path.join(scratch_folder, table_name))
+ if not arcpy.Exists(region_scratch_workspace):
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, table_name), "scratch")
+ del region_scratch_workspace
+
+ datasets = [os.path.join(project_gdb, "Datasets"), os.path.join(project_gdb, f"{table_name}_Region")]
+ if not any(arcpy.management.GetCount(d)[0] == 0 for d in datasets):
+ if not arcpy.Exists(os.path.join(scratch_folder, f"{table_name}.gdb")):
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ else:
+ pass
+
+ arcpy.management.Copy(os.path.join(project_gdb, "Datasets"), rf"{region_gdb}\Datasets")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.management.Copy(os.path.join(project_gdb, f"{table_name}_Region"), rf"{region_gdb}\{table_name}_Region")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ else:
+ arcpy.AddWarning("One or more datasets contains zero records!!")
+ for d in datasets:
+ arcpy.AddMessage(f"\t{os.path.basename(d)} has {arcpy.management.GetCount(d)[0]} records")
+ del d
+ arcpy.AddError(f"SystemExit at line number: '{traceback.extract_stack()[-1].lineno}'")
+ sys.exit()
+
+ if "datasets" in locals().keys():
+ del datasets
+
+ del region_gdb, table_name
+
+ del scratch_workspace
+
+ # Sequential Processing
+ if Sequential:
+ arcpy.AddMessage("Sequential Processing")
+ for table_name in table_names:
+ arcpy.AddMessage(f"Processing: {table_name}")
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ try:
+ worker(region_gdb=region_gdb)
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ del region_gdb, table_name
+
+ # Non-Sequential Processing
+ if not Sequential:
+ import multiprocessing
+ from time import time, localtime, strftime, sleep, gmtime
+ arcpy.AddMessage("Non-Sequential Processing")
+ # Set the multiprocessing executable in case we're running as an embedded process (i.e. inside ArcGIS Pro),
+ # which would otherwise try to spawn workers from the host application's executable
+ multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
+ # Get the CPU count, then reserve two cores for other processes
+ _processes = min(multiprocessing.cpu_count() - 2, len(table_names))
+ arcpy.AddMessage(f"Creating the multiprocessing Pool with {_processes} processes")
+ # Create a pool of workers; let each worker process handle only one task
+ # before being restarted (in case of nasty memory leaks)
+ with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
+ arcpy.AddMessage("\tPrepare arguments for processing")
+ # Use apply_async so we can handle exceptions gracefully
+ jobs={}
+ for table_name in table_names:
+ try:
+ arcpy.AddMessage(f"Processing: {table_name}")
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ jobs[table_name] = pool.apply_async(worker, [region_gdb])
+ del table_name, region_gdb
+ except: # noqa: E722
+ pool.terminate()
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ all_finished = False
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ result_completed = {}
+ while True:
+ all_finished = True
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ arcpy.AddMessage(f"\nStart Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage("Have the workers finished?")
+ finish_time = strftime('%a %b %d %I:%M %p', localtime())
+ time_elapsed = "Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
+ arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
+ finish_time = f"{finish_time}.\n\t{time_elapsed}"
+ del time_elapsed
+ for table_name, result in jobs.items():
+ if result.ready():
+ if table_name not in result_completed:
+ result_completed[table_name] = finish_time
+ try:
+ # wait for and get the result from the task
+ result.get()
+ except SystemExit:
+ pool.terminate()
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ pool.terminate()
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ pass
+ arcpy.AddMessage(f"Process {table_name}\n\tFinished on {result_completed[table_name]}")
+ else:
+ all_finished = False
+ arcpy.AddMessage(f"Process {table_name} is running. . .")
+ del table_name, result
+ del elapse_time, end_time, finish_time
+ if all_finished:
+ break
+ sleep(_processes * 7.5)
+ del result_completed
+ del start_time
+ del all_finished
+ arcpy.AddMessage("\tClose the process pool")
+ # close the process pool
+ pool.close()
+ # wait for all tasks to complete and processes to close
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
+ pool.join()
+ # Just in case
+ pool.terminate()
+ del pool
+ del jobs
+ del _processes
+ del time, multiprocessing, localtime, strftime, sleep, gmtime
+ arcpy.AddMessage("\tDone with multiprocessing Pool")
+
+ arcpy.AddMessage("Post-Processing")
+ arcpy.AddMessage("Processing Results")
+ datasets = list()
+ #walk = arcpy.da.Walk(scratch_folder, datatype="FeatureClass", type=["Polyline", "Polygon"])
+ walk = arcpy.da.Walk(scratch_folder)
+ for dirpath, dirnames, filenames in walk:
+ for filename in filenames:
+ datasets.append(os.path.join(dirpath, filename))
+ del filename
+ del dirpath, dirnames, filenames
+ del walk
+ for dataset in datasets:
+ datasets_short_path = f".. {'/'.join(dataset.split(os.sep)[-4:])}"
+ dataset_name = os.path.basename(dataset)
+ region_gdb = os.path.dirname(dataset)
+ arcpy.AddMessage(f"\tDataset: '{dataset_name}'")
+ arcpy.AddMessage(f"\t\tPath: '{datasets_short_path}'")
+ arcpy.AddMessage(f"\t\tRegion GDB: '{os.path.basename(region_gdb)}'")
+ arcpy.management.Copy(dataset, rf"{project_gdb}\{dataset_name}")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ arcpy.management.Delete(dataset)
+ arcpy.AddMessage("\tDelete: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ arcpy.management.Compact(region_gdb)
+ arcpy.AddMessage("\tCompact: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del region_gdb
+ del dataset
+ del dataset_name
+ del datasets_short_path
+ del datasets
+ arcpy.AddMessage(f"Compacting the {os.path.basename(project_gdb)} GDB")
+ arcpy.management.Compact(project_gdb)
+ arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+ # Declared Variables assigned in function
+ del scratch_folder, csv_data_folder
+ # Imports
+ del dismap_tools, worker
+ # Function Parameters
+ del project_gdb, Sequential, table_names
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit as se:
+ arcpy.AddError(f"Caught a SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ sys.exit()
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an unexpected error in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
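The non-sequential branch of `director` fans work out with `Pool.apply_async`, then polls `result.ready()` until every job finishes before collecting with `get()`. A compact sketch of that submit-then-poll pattern, using `multiprocessing.pool.ThreadPool` (same `Pool` API, chosen here only so the sketch runs without process-spawn setup — the script itself uses a real process pool with `maxtasksperchild=1`):

```python
from multiprocessing.pool import ThreadPool
from time import sleep

def work(name: str) -> str:
    sleep(0.05)  # stand-in for the real geoprocessing worker
    return f"{name} done"

names = ["AI_IDW", "NBS_IDW", "HI_IDW"]
results = {}
with ThreadPool(processes=2) as pool:
    # Submit one job per table name, as director does with apply_async
    jobs = {name: pool.apply_async(work, [name]) for name in names}
    while True:
        # Poll until every job reports ready(), then collect with get(),
        # which re-raises any exception from the worker
        if all(job.ready() for job in jobs.values()):
            results = {name: job.get() for name, job in jobs.items()}
            break
        sleep(0.01)
```

Calling `get()` only after `ready()` is true keeps the loop non-blocking, which is why the script can print progress messages between polls.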
+
+def script_tool(project_gdb=""):
+ try:
+ # Imports
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+ del project_folder
+
+ # Create project scratch workspace, if missing
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ if not arcpy.Exists(scratch_folder):
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", "scratch")
+ del scratch_folder
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ try:
+ # "AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW",
+ # "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",
+
+ test = False
+ if test:
+ director(project_gdb=project_gdb, Sequential=True, table_names=["NBS_IDW", "SEUS_FAL_IDW"])
+ else:
+ director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
+ del test
+
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+
+ # Declared Variables
+ # Imports
+ # Function Parameters
+ del project_gdb
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(end_time-start_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ except SystemExit as se:
+ arcpy.AddError(f"Caught a SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ sys.exit()
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an unexpected error in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ #project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\February 1 2026\February 1 2026.gdb"
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+ script_tool(project_gdb)
+ arcpy.SetParameterAsText(1, "Result")
+ del project_gdb
+ except: # noqa: E722
+ traceback.print_exc()
+ else:
+ pass
+ finally:
+ pass
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_fishnets_worker.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_fishnets_worker.py
new file mode 100644
index 0000000..f87f6a4
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_fishnets_worker.py
@@ -0,0 +1,657 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_region_fishnets_worker.py
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 25/02/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+import inspect
+
+import arcpy # third-parties second
+
+def worker(region_gdb=""):
+ try:
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(rf"{region_gdb}"):
+ sys.exit(f"{os.path.basename(region_gdb)} is missing!!")
+
+ # Imports
+ from arcpy import metadata as md
+ import dismap_tools
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ table_name = os.path.basename(region_gdb).replace(".gdb","")
+ scratch_folder = os.path.dirname(region_gdb)
+ project_folder = os.path.dirname(scratch_folder)
+ csv_data_folder = rf"{project_folder}\CSV_Data"
+ scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+
+ #arcpy.AddMessage(f"Table Name: {table_name}\nProject Folder: {os.path.basename(project_folder)}\nScratch Folder: {os.path.basename(scratch_folder)}\n")
+
+ del scratch_folder, project_folder
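The worker derives its table name and folder layout entirely from the region GDB path. The path arithmetic, isolated as a stdlib sketch (posix-style separators used here only so the sketch is portable; `derive_paths` is an illustrative name, not a function from the script):

```python
import os

def derive_paths(region_gdb: str):
    """Mirror the worker's derivation of name and folders from a region GDB path."""
    table_name = os.path.basename(region_gdb).replace(".gdb", "")  # e.g. "AI_IDW"
    scratch_folder = os.path.dirname(region_gdb)                   # parent Scratch folder
    project_folder = os.path.dirname(scratch_folder)               # project root
    return table_name, scratch_folder, project_folder

print(derive_paths("project/Scratch/AI_IDW.gdb"))  # → ('AI_IDW', 'project/Scratch', 'project')
```

Because everything is derived from one argument, the worker can run in a separate process with no shared state beyond the GDB path it receives.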
+
+ arcpy.env.workspace = region_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.env.compression = "LZ77"
+ #arcpy.env.geographicTransformations = "WGS_1984_(ITRF08)_To_NAD_1983_2011"
+ arcpy.env.pyramid = "PYRAMIDS -1 BILINEAR DEFAULT 75 NO_SKIP NO_SIPS"
+ arcpy.env.resamplingMethod = "BILINEAR"
+ arcpy.env.rasterStatistics = "STATISTICS 1 1"
+ #arcpy.env.XYTolerance = "0.1 Meters"
+ #arcpy.env.XYResolution = "0.01 Meters"
+
+ # DatasetCode, CSVFile, TransformUnit, TableName, GeographicArea, CellSize,
+ # PointFeatureType, FeatureClassName, Region, Season, DateCode, Status,
+ # DistributionProjectCode, DistributionProjectName, SummaryProduct,
+ # FilterRegion, FilterSubRegion, FeatureServiceName, FeatureServiceTitle,
+ # MosaicName, MosaicTitle, ImageServiceName, ImageServiceTitle
+
+ fields = ["TableName", "CellSize",]
+ region_list = [row for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", fields, where_clause = f"TableName = '{table_name}'")][0]
+ del fields
+
+ # Assigning variables from items in the chosen table list
+ # ['AI_IDW', 'AI_IDW_Region', 'AI', 'Aleutian Islands', None, 'IDW']
+ table_name = region_list[0]
+ cell_size = region_list[1]
+ del region_list
+
+ process_region = rf"{region_gdb}\{table_name}_Region"
+ region_raster_mask = rf"{table_name}_Raster_Mask"
+ region_extent_points = rf"{table_name}_Extent_Points"
+ region_fishnet = rf"{table_name}_Fishnet"
+ region_lat_long = rf"{table_name}_Lat_Long"
+ region_latitude = rf"{table_name}_Latitude"
+ region_longitude = rf"{table_name}_Longitude"
+ region_name = rf"{table_name}_Region"
+
+ arcpy.AddMessage(f"Region: {region_name}")
+ arcpy.AddMessage(f"Region GDB: {os.path.basename(arcpy.env.workspace)}")
+ arcpy.AddMessage(f"Scratch GDB: {os.path.basename(arcpy.env.scratchWorkspace)}")
+
+ psr = arcpy.Describe(process_region).spatialReference
+ arcpy.env.outputCoordinateSystem = psr
+ arcpy.AddMessage(f"\t\tSpatial Reference: {psr.name}")
+ # Set coordinate system of the output fishnet
+ # 4326 - World Geodetic System 1984 (WGS 84) and 3857 - Web Mercator
+ # Spatial Reference factory code of 4326 is : GCS_WGS_1984
+ # Spatial Reference factory code of 5714 is : Mean Sea Level (Height)
+ # sr = arcpy.SpatialReference(4326, 5714)
+ #gsr = arcpy.SpatialReference(4326, 5714)
+ gsr = arcpy.SpatialReference(4326)
+
+ #arcpy.AddMessage("process_region")
+ #arcpy.AddMessage(f"Spatial Reference: {str(arcpy.Describe(process_region).spatialReference.name)}")
+ #arcpy.AddMessage(f"Extent: {str(arcpy.Describe(process_region).extent).replace(' NaN', '')}")
+ #arcpy.AddMessage(f"Output Coordinate System: {arcpy.env.outputCoordinateSystem.name}")
+ #arcpy.AddMessage(f"Geographic Transformations: {arcpy.env.geographicTransformations}")
+
+ # Creating Raster Mask
+ arcpy.AddMessage(f"Creating Raster Mask: {table_name}_Raster_Mask")
+
+ cell_size = [row[0] for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", "CellSize", where_clause = f"GeographicArea = '{region_name}'")][0]
+
+ arcpy.management.CalculateField(rf"{process_region}", "ID", 1)
+ arcpy.AddMessage("\tCalculate Field 'ID' for {0}:\n\t\t{1}\n".format(f"{region_name}", arcpy.GetMessages(0).replace("\n", "\n\t\t")))
+
+ arcpy.conversion.FeatureToRaster(rf"{process_region}", "ID", rf"{region_gdb}\{region_raster_mask}", cell_size)
+ arcpy.AddMessage("\tFeature To Raster for {0}:\n\t\t{1}\n".format(f"{region_name}", arcpy.GetMessages(0).replace("\n", "\n\t\t")))
+
+ arcpy.management.DeleteField(rf"{process_region}", "ID")
+ arcpy.AddMessage("\tDelete Field 'ID' field in {0}:\n\t\t{1}\n".format(f"{region_name}", arcpy.GetMessages(0).replace("\n", "\n\t\t")))
+
+ #del edit
+
+ # Creating Extent Points
+ arcpy.AddMessage(f"Creating Extent Points: {region_extent_points}")
+
+ extent = arcpy.Describe(process_region).extent
+ X_Min, Y_Min, X_Max, Y_Max = extent.XMin, extent.YMin, extent.XMax, extent.YMax
+ del extent
+
+ arcpy.AddMessage(f"\t{region_name} Extent:\n\t\tX_Min: {X_Min}\n\t\tY_Min: {Y_Min}\n\t\tX_Max: {X_Max}\n\t\tY_Max: {Y_Max}\n")
+
+ # A list of coordinate pairs
+ pointList = [[X_Min, Y_Min], [X_Min, Y_Max], [X_Max, Y_Max]]
+ # Create an empty Point object
+ point = arcpy.Point()
+ # A list to hold the PointGeometry objects
+ pointGeometryList = []
+ # For each coordinate pair, populate the Point object and create a new
+ # PointGeometry object
+ for pt in pointList:
+ point.X = pt[0]
+ point.Y = pt[1]
+ pointGeometry = arcpy.PointGeometry(point, arcpy.Describe(process_region).spatialReference)
+ pointGeometryList.append(pointGeometry)
+ del pt, pointGeometry
+ # Delete after last use
+ del pointList, point
+
+ # Create a copy of the PointGeometry objects, by using pointGeometryList as
+ # input to the CopyFeatures tool.
+ arcpy.management.CopyFeatures(pointGeometryList, rf"{region_gdb}\{region_extent_points}")
+ arcpy.AddMessage("\tCopy Features to {0}:\n\t\t{1}\n".format(region_extent_points, arcpy.GetMessages(0).replace("\n", "\n\t\t")))
+
+ del pointGeometryList
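The three corners built above (lower-left, upper-left, upper-right) are the same control points `CreateFishnet` later takes as origin, Y-axis point, and opposite corner. The construction, stripped of arcpy (`fishnet_control_points` is an illustrative name for this sketch):

```python
def fishnet_control_points(x_min, y_min, x_max, y_max):
    """Build the three extent corners used as fishnet control points."""
    origin = (x_min, y_min)   # fishnet origin coordinate
    y_axis = (x_min, y_max)   # point on the Y axis, fixing grid orientation
    corner = (x_max, y_max)   # opposite corner of the template extent
    return origin, y_axis, corner

pts = fishnet_control_points(0.0, 0.0, 100.0, 50.0)
```

Only three of the four extent corners are needed: the fourth is implied once the origin, axis direction, and opposite corner are fixed.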
+
+ # arcpy.AddMessage("tmp_region_extent_points")
+ # tmp_region_extent_points = rf"{region_gdb}\{region_extent_points}"
+ # arcpy.AddMessage(f"Spatial Reference: {str(arcpy.Describe(tmp_region_extent_points).spatialReference.name)}")
+ # arcpy.AddMessage(f"Extent: {str(arcpy.Describe(tmp_region_extent_points).extent).replace(' NaN', '')}")
+ # arcpy.AddMessage(f"Output Coordinate System: {arcpy.env.outputCoordinateSystem.name}")
+ # arcpy.AddMessage(f"Geographic Transformations: {arcpy.env.geographicTransformations}")
+ # del tmp_region_extent_points
+
+ with arcpy.EnvManager(outputCoordinateSystem = psr):
+ arcpy.management.AddXY(in_features = rf"{region_gdb}\{region_extent_points}")
+ arcpy.AddMessage("\tAdd XY:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ arcpy.management.AlterField(
+ in_table = rf"{region_gdb}\{region_extent_points}",
+ field = "POINT_X",
+ new_field_name = "Easting",
+ new_field_alias = "Easting",
+ field_type = "",
+ field_length = None,
+ field_is_nullable = "NULLABLE",
+ clear_field_alias = "DO_NOT_CLEAR"
+ )
+ arcpy.AddMessage("\tAlter Field:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ arcpy.management.AlterField(
+ in_table = rf"{region_gdb}\{region_extent_points}",
+ field = "POINT_Y",
+ new_field_name = "Northing",
+ new_field_alias = "Northing",
+ field_type = "",
+ field_length = None,
+ field_is_nullable = "NULLABLE",
+ clear_field_alias = "DO_NOT_CLEAR"
+ )
+ arcpy.AddMessage("\tAlter Field:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ tmp_outputCoordinateSystem = arcpy.env.outputCoordinateSystem
+ arcpy.env.outputCoordinateSystem = gsr
+
+ with arcpy.EnvManager(outputCoordinateSystem = gsr, geographicTransformations = dismap_tools.check_transformation(rf"{region_gdb}\{region_extent_points}", gsr)):
+ arcpy.management.AddXY(in_features = rf"{region_gdb}\{region_extent_points}")
+ arcpy.AddMessage("\tAdd XY:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ arcpy.env.outputCoordinateSystem = tmp_outputCoordinateSystem
+ del tmp_outputCoordinateSystem
+
+ arcpy.management.AlterField(
+ in_table = rf"{region_gdb}\{region_extent_points}",
+ field = "POINT_X",
+ new_field_name = "Longitude",
+ new_field_alias = "Longitude",
+ field_type = "",
+ field_length = None,
+ field_is_nullable = "NULLABLE",
+ clear_field_alias = "DO_NOT_CLEAR"
+ )
+ arcpy.AddMessage("\tAlter Field:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ arcpy.management.AlterField(
+ in_table = rf"{region_gdb}\{region_extent_points}",
+ field = "POINT_Y",
+ new_field_name = "Latitude",
+ new_field_alias = "Latitude",
+ field_type = "",
+ field_length = None,
+ field_is_nullable = "NULLABLE",
+ clear_field_alias = "DO_NOT_CLEAR"
+ )
+ arcpy.AddMessage("\tAlter Field:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ # Creating Fishnet
+ arcpy.AddMessage(f"Creating Fishnet: {region_fishnet}")
+ arcpy.AddMessage(f"\tCreate Fishnet for {region_name} with {cell_size} by {cell_size} cells")
+ arcpy.management.CreateFishnet(
+ rf"{region_gdb}\{region_fishnet}",
+ f"{X_Min} {Y_Min}",
+ f"{X_Min} {Y_Max}",
+ cell_size,
+ cell_size,
+ None,
+ None,
+ f"{X_Max} {Y_Max}",
+ "NO_LABELS",
+ "DEFAULT",
+ "POLYGON"
+ )
+ arcpy.AddMessage("\tCreate Fishnet for {0}:\n\t\t{1}\n".format(f"{region_name}", arcpy.GetMessages(0).replace("\n", "\n\t\t")))
+
+ del X_Min, Y_Min, X_Max, Y_Max
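With both cell dimensions set to `cell_size` and the row/column counts left as `None`, `CreateFishnet` derives the grid size from the origin and opposite corner. The equivalent arithmetic, as a sketch (the ceiling rounding is an assumption of this illustration, chosen so partial cells at the far edge are still covered):

```python
import math

def fishnet_dimensions(x_min, y_min, x_max, y_max, cell_size):
    """Rows and columns a fishnet needs to cover an extent at a given cell size."""
    cols = math.ceil((x_max - x_min) / cell_size)
    rows = math.ceil((y_max - y_min) / cell_size)
    return rows, cols

print(fishnet_dimensions(0, 0, 10500, 4200, 1000))  # → (5, 11)
```

This is also why the later `SelectLayerByLocation` step trims cells more than two cell widths outside the region polygon: the rectangular grid necessarily overshoots an irregular region boundary.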
+
+ arcpy.management.MakeFeatureLayer(rf"{region_gdb}\{region_fishnet}", f"{region_name}_Fishnet_Layer")
+ arcpy.AddMessage("\tMake Feature Layer for {0}:\n\t\t{1}\n".format(f"{region_fishnet}", arcpy.GetMessages(0).replace("\n", "\n\t\t")))
+ arcpy.AddMessage(f"\t\tRecord Count: {int(arcpy.management.GetCount(f'{region_name}_Fishnet_Layer')[0]):,d}")
+
+ arcpy.management.SelectLayerByLocation(f"{region_name}_Fishnet_Layer", "WITHIN_A_DISTANCE", process_region, 2 * int(cell_size), "NEW_SELECTION", "INVERT")
+ arcpy.AddMessage("\tSelect Layer By Location:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+ arcpy.AddMessage(f"\t\tRecord Count: {int(arcpy.management.GetCount(f'{region_name}_Fishnet_Layer')[0]):,d}")
+
+ arcpy.management.DeleteFeatures(f"{region_name}_Fishnet_Layer")
+ arcpy.AddMessage("\tDelete Features:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ arcpy.management.Delete(f"{region_name}_Fishnet_Layer")
+ arcpy.AddMessage("\tDelete {0}:\n\t\t{1}\n".format(f"{region_name}_Fishnet_Layer", arcpy.GetMessages(0).replace("\n", "\n\t\t")))
+
+ # Creating Lat-Long
+ arcpy.AddMessage(f"Creating Lat-Long: {region_lat_long}")
+ arcpy.management.FeatureToPoint(rf"{region_gdb}\{region_fishnet}", rf"{region_gdb}\{region_lat_long}", "CENTROID")
+ arcpy.AddMessage("\tFeature To Point:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ # Execute DeleteField
+ arcpy.management.DeleteField(rf"{region_gdb}\{region_lat_long}", ['ORIG_FID'])
+ arcpy.AddMessage("\tDelete Field:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ with arcpy.EnvManager(outputCoordinateSystem = psr):
+ arcpy.management.AddXY(in_features=rf"{region_gdb}\{region_lat_long}")
+ arcpy.AddMessage("\tAdd XY:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ arcpy.management.AlterField(
+ in_table = rf"{region_gdb}\{region_lat_long}",
+ field = "POINT_X",
+ new_field_name = "Easting",
+ new_field_alias = "Easting",
+ field_type = "",
+ field_length = None,
+ field_is_nullable = "NULLABLE",
+ clear_field_alias = "DO_NOT_CLEAR"
+ )
+ arcpy.AddMessage("\tAlter Field:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ arcpy.management.AlterField(
+ in_table = rf"{region_gdb}\{region_lat_long}",
+ field = "POINT_Y",
+ new_field_name = "Northing",
+ new_field_alias = "Northing",
+ field_type = "",
+ field_length = None,
+ field_is_nullable = "NULLABLE",
+ clear_field_alias = "DO_NOT_CLEAR"
+ )
+ arcpy.AddMessage("\tAlter Field:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ with arcpy.EnvManager(outputCoordinateSystem = gsr, geographicTransformations = dismap_tools.check_transformation(rf"{region_gdb}\{region_extent_points}", gsr)):
+ arcpy.management.AddXY(in_features=rf"{region_gdb}\{region_lat_long}")
+ arcpy.AddMessage("\tAdd XY:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ arcpy.management.AlterField(
+ in_table = rf"{region_gdb}\{region_lat_long}",
+ field = "POINT_X",
+ new_field_name = "Longitude",
+ new_field_alias = "Longitude",
+ field_type = "",
+ field_length = None,
+ field_is_nullable = "NULLABLE",
+ clear_field_alias = "DO_NOT_CLEAR"
+ )
+ arcpy.AddMessage("\tAlter Field:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ arcpy.management.AlterField(
+ in_table = rf"{region_gdb}\{region_lat_long}",
+ field = "POINT_Y",
+ new_field_name = "Latitude",
+ new_field_alias = "Latitude",
+ field_type = "",
+ field_length = None,
+ field_is_nullable = "NULLABLE",
+ clear_field_alias = "DO_NOT_CLEAR"
+ )
+ arcpy.AddMessage("\tAlter Field:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ # arcpy.management.CalculateFields(
+ # in_table = rf"{region_gdb}\{region_lat_long}",
+ # expression_type = "PYTHON3",
+ # fields = "Easting 'round(!Easting!, 8)' #;Northing 'round(!Northing!, 8)' #;Longitude 'round(!Longitude!, 8)' #;Latitude 'round(!Latitude!, 8)' #",
+ # code_block = "",
+ # enforce_domains = "NO_ENFORCE_DOMAINS"
+ # )
+ # arcpy.AddMessage("\tCalculate Fields:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ arcpy.AddMessage(f"Generating {table_name} Latitude and Longitude Rasters")
+
+ # arcpy.env.cellSize = cell_size
+ # arcpy.env.extent = arcpy.Describe(rf"{region_gdb}\{region_raster_mask}").extent
+ # arcpy.env.mask = rf"{region_gdb}\{region_raster_mask}"
+ # arcpy.env.snapRaster = rf"{region_gdb}\{region_raster_mask}"
+
+ raster_mask_extent = arcpy.Describe(rf"{region_gdb}\{region_raster_mask}").extent
+
+ arcpy.AddMessage(f"Point to Raster Conversion using {region_lat_long} to create {region_longitude}")
+
+ region_longitude_tmp = rf"{region_gdb}\tmp_{region_longitude}"
+
+ with arcpy.EnvManager(scratchWorkspace=scratch_workspace, workspace = region_gdb, cellSize = cell_size, extent = raster_mask_extent, mask = rf"{region_gdb}\{region_raster_mask}", snapRaster = rf"{region_gdb}\{region_raster_mask}"):
+ arcpy.conversion.PointToRaster(rf"{region_gdb}\{region_lat_long}", "Longitude", region_longitude_tmp, "MOST_FREQUENT", "NONE", cell_size)
+ arcpy.AddMessage("\tPoint To Raster:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ arcpy.AddMessage(f"Extract by Mask to create {region_longitude}")
+
+ with arcpy.EnvManager(scratchWorkspace=scratch_workspace, workspace = region_gdb,
+ cellSize = cell_size, extent = raster_mask_extent,
+ mask = rf"{region_gdb}\{region_raster_mask}",
+ snapRaster = rf"{region_gdb}\{region_raster_mask}"):
+ # Execute ExtractByMask
+ outExtractByMask = arcpy.sa.ExtractByMask(region_longitude_tmp, rf"{region_gdb}\{region_raster_mask}", "INSIDE")
+ arcpy.AddMessage("\tExtract By Mask:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+ # Save the output
+ outExtractByMask.save(rf"{region_gdb}\{region_longitude}")
+ del outExtractByMask
+
+ arcpy.management.Delete(region_longitude_tmp)
+ del region_longitude_tmp
+
+ region_latitude_tmp = rf"{region_gdb}\tmp_{region_latitude}"
+
+ arcpy.AddMessage(f"Point to Raster Conversion using {region_lat_long} to create {region_latitude}")
+
+ with arcpy.EnvManager(scratchWorkspace=scratch_workspace, workspace = region_gdb,
+ cellSize = cell_size, extent = raster_mask_extent,
+ mask = rf"{region_gdb}\{region_raster_mask}",
+ snapRaster = rf"{region_gdb}\{region_raster_mask}"):
+ # Process: Point to Raster Latitude
+ arcpy.conversion.PointToRaster(rf"{region_gdb}\{region_lat_long}", "Latitude", region_latitude_tmp, "MOST_FREQUENT", "NONE", cell_size, "BUILD")
+ arcpy.AddMessage("\tPoint To Raster:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+
+ arcpy.AddMessage(f"Extract by Mask to create {region_latitude}")
+
+ with arcpy.EnvManager(scratchWorkspace=scratch_workspace, workspace = region_gdb, cellSize = cell_size, extent = raster_mask_extent, mask = rf"{region_gdb}\{region_raster_mask}", snapRaster = rf"{region_gdb}\{region_raster_mask}"):
+ # Execute ExtractByMask
+ outExtractByMask = arcpy.sa.ExtractByMask(region_latitude_tmp, rf"{region_gdb}\{region_raster_mask}", "INSIDE")
+ arcpy.AddMessage("\tExtract By Mask:\n\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t")))
+ # Save the output
+ outExtractByMask.save(rf"{region_gdb}\{region_latitude}")
+ del outExtractByMask
+
+ arcpy.management.Delete(region_latitude_tmp)
+ del region_latitude_tmp
+
+ del raster_mask_extent
+
+ arcpy.ClearEnvironment("cellSize")
+ arcpy.ClearEnvironment("extent")
+ arcpy.ClearEnvironment("mask")
+ arcpy.ClearEnvironment("snapRaster")
+
+ # Reset environment settings to default settings.
+ arcpy.ResetEnvironments()
+
+ arcpy.AddMessage(f"\t\tAlter Fields for: '{region_raster_mask}'")
+ #dismap_tools.alter_fields(csv_data_folder, rf"{region_gdb}\{region_raster_mask}")
+ dismap_tools.import_metadata(csv_data_folder, dataset = rf"{region_gdb}\{region_raster_mask}")
+
+ # Create Metadata
+ dataset_md = md.Metadata(region_raster_mask)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ del dataset_md
+
+ arcpy.AddMessage(f"\t\tAlter Fields for: '{region_extent_points}'")
+ dismap_tools.alter_fields(csv_data_folder, rf"{region_gdb}\{region_extent_points}")
+ dismap_tools.import_metadata(csv_data_folder, dataset = rf"{region_gdb}\{region_extent_points}")
+
+ # Create Metadata
+ dataset_md = md.Metadata(region_extent_points)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ del dataset_md
+
+ arcpy.AddMessage(f"\t\tAlter Fields for: '{region_fishnet}'")
+ dismap_tools.alter_fields(csv_data_folder, rf"{region_gdb}\{region_fishnet}")
+ dismap_tools.import_metadata(csv_data_folder, dataset = rf"{region_gdb}\{region_fishnet}")
+
+ # Create Metadata
+ dataset_md = md.Metadata(region_fishnet)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ del dataset_md
+
+ arcpy.AddMessage(f"\t\tAlter Fields for: '{region_lat_long}'")
+ dismap_tools.alter_fields(csv_data_folder, rf"{region_gdb}\{region_lat_long}")
+ dismap_tools.import_metadata(csv_data_folder, dataset = rf"{region_gdb}\{region_lat_long}")
+
+ # Create Metadata
+ dataset_md = md.Metadata(region_lat_long)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ del dataset_md
+
+ arcpy.AddMessage(f"\t\tAlter Fields for: '{region_latitude}'")
+ dismap_tools.import_metadata(csv_data_folder, dataset = rf"{region_gdb}\{region_latitude}")
+
+ # Create Metadata
+ dataset_md = md.Metadata(region_latitude)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ del dataset_md
+
+ arcpy.AddMessage(f"\t\tAlter Fields for: '{region_longitude}'")
+ dismap_tools.import_metadata(csv_data_folder, dataset = rf"{region_gdb}\{region_longitude}")
+
+ # Create Metadata
+ dataset_md = md.Metadata(region_longitude)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ del dataset_md
+
+ arcpy.management.Delete(process_region)
+ arcpy.management.Delete(rf"{region_gdb}\Datasets")
+
+ del process_region, region_raster_mask, region_extent_points, region_fishnet
+ del region_lat_long, region_latitude, region_longitude
+ del psr, gsr
+ del cell_size
+
+ arcpy.AddMessage(f"Compacting the {os.path.basename(region_gdb)} GDB")
+ arcpy.management.Compact(region_gdb)
+ arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+
+ # End of business logic for the worker function
+ arcpy.AddMessage(f"Processing for: {table_name} complete")
+
+ # Declared Variables
+ del region_name, table_name
+ del scratch_workspace, csv_data_folder
+ # Imports
+ del dismap_tools, md
+ # Function parameter
+ del region_gdb
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit as se:
+ arcpy.AddError(f"Caught a SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ sys.exit()
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an unhandled exception in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+def script_tool(project_gdb=""):
+ try:
+ import dismap_tools
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+ del project_folder
+
+ # Clear Scratch Folder
+ dismap_tools.clear_folder(folder=scratch_folder)
+
+ # Create project scratch workspace, if missing
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ if not arcpy.Exists(scratch_folder):
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", "scratch")
+
+ # Set worker parameters
+ #table_name = "AI_IDW"
+ #table_name = "GMEX_IDW"
+ #table_name = "HI_IDW"
+ #table_name = "SEUS_FAL_IDW"
+ table_name = "NBS_IDW"
+
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+
+ if not arcpy.Exists(scratch_workspace):
+ os.makedirs(os.path.join(scratch_folder, table_name))
+ if not arcpy.Exists(scratch_workspace):
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, f"{table_name}"), "scratch")
+ del scratch_workspace
+
+ # Setup worker workspace and copy data
+ #datasets = [os.path.join(project_gdb, "Datasets"), os.path.join(project_gdb, f"{table_name}_Region")]
+ #if not any(int(arcpy.management.GetCount(d)[0]) == 0 for d in datasets):
+
+ if not arcpy.Exists(os.path.join(scratch_folder, f"{table_name}.gdb")):
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ else:
+ pass
+ arcpy.management.Copy(os.path.join(project_gdb, "Datasets"), rf"{region_gdb}\Datasets")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.management.Copy(os.path.join(project_gdb, f"{table_name}_Region"), rf"{region_gdb}\{table_name}_Region")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ #else:
+ # arcpy.AddWarning(f"One or more datasets contains zero records!!")
+ # for d in datasets:
+ # arcpy.AddMessage(f"\t{os.path.basename(d)} has {arcpy.management.GetCount(d)[0]} records")
+ # del d
+ # sys.exit()
+ #if "datasets" in locals().keys(): del datasets
+
+ try:
+ worker(region_gdb=region_gdb)
+ except SystemExit:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+
+ # Declared Variables
+ del region_gdb, table_name, scratch_folder
+ # Imports
+ del dismap_tools
+ # Function Parameters
+ del project_gdb
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(end_time-start_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit as se:
+ arcpy.AddError(f"Caught a SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ sys.exit()
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an unhandled exception in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+ script_tool(project_gdb)
+ arcpy.SetParameterAsText(1, "Result")
+ del project_gdb
+ except: # noqa: E722
+ traceback.print_exc()
+ else:
+ pass
+ finally:
+ pass
\ No newline at end of file
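Nearly every geoprocessing call in the script above is followed by the same `arcpy.GetMessages().replace("\n", "\n\t\t")` idiom to indent tool messages under a heading. As a hedged sketch (the `indent_messages` helper name is invented here, and a plain string stands in for `arcpy.GetMessages()` output), the pattern factors out to:

```python
def indent_messages(messages: str, level: int = 2) -> str:
    """Indent every line of a geoprocessing message block by `level` tabs.

    Mirrors the repeated idiom:
        "\\t" * level + messages.replace("\\n", "\\n" + "\\t" * level)
    """
    pad = "\t" * level
    return pad + messages.replace("\n", "\n" + pad)


# Example with a stand-in for arcpy.GetMessages() output
print(indent_messages("Start Time: ...\nSucceeded at ...", level=2))
```

In the real scripts this would wrap `arcpy.GetMessages()` before the result is passed to `arcpy.AddMessage`, replacing the dozens of inline `.replace(...)` calls.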
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_sample_locations_director.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_sample_locations_director.py
new file mode 100644
index 0000000..8d1587a
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_sample_locations_director.py
@@ -0,0 +1,355 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_region_sample_locations_director.py
+# Purpose: Direct sequential or multiprocessing runs of the region sample locations worker
+#
+# Author: john.f.kennedy
+#
+# Created: 25/02/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+
+import arcpy # third-parties second
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = __file__
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def director(project_gdb="", Sequential=True, table_names=[]):
+ try:
+ # Imports
+ import dismap_tools
+ from create_region_sample_locations_worker import worker
+ from arcpy import metadata as md
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(project_gdb):
+ sys.exit(f"{os.path.basename(project_gdb)} is missing!!")
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+ csv_data_folder = rf"{project_folder}\CSV Data"
+
+ # Clear Scratch Folder
+ dismap_tools.clear_folder(folder=scratch_folder)
+
+ # Create Scratch Workspace for Project
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ if not arcpy.Exists(scratch_folder):
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", "scratch")
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ del project_folder, scratch_workspace
+
+ if not table_names:
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"), "TableName", where_clause = "TableName LIKE '%_IDW'")]
+ else:
+ pass
+
+ # Pre Processing
+ for table_name in table_names:
+ arcpy.AddMessage(f"Pre-Processing: {table_name}")
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ region_scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+ # Create Scratch Workspace for Region
+ if not arcpy.Exists(region_scratch_workspace):
+ os.makedirs(os.path.join(scratch_folder, table_name))
+ if not arcpy.Exists(region_scratch_workspace):
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, f"{table_name}"), "scratch")
+ del region_scratch_workspace
+ #datasets = [os.path.join(project_gdb, "Datasets"), os.path.join(project_gdb, f"{table_name}_Region")]
+ #if not any(int(arcpy.management.GetCount(d)[0]) == 0 for d in datasets):
+
+ if not arcpy.Exists(os.path.join(scratch_folder, f"{table_name}.gdb")):
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ else:
+ pass
+
+ arcpy.management.Copy(os.path.join(project_gdb, "Datasets"), rf"{region_gdb}\Datasets")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.management.Copy(os.path.join(project_gdb, f"{table_name}_Region"), rf"{region_gdb}\{table_name}_Region")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ del region_gdb, table_name
+
+ # Sequential Processing
+ if Sequential:
+ arcpy.AddMessage("Sequential Processing")
+ for table_name in table_names:
+ arcpy.AddMessage(f"Processing: {table_name}")
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ try:
+ worker(region_gdb=region_gdb)
+ except: # noqa: E722
+ traceback.print_exc()
+ sys.exit()
+ del region_gdb, table_name
+ else:
+ pass
+
+ # Non-Sequential Processing
+ if not Sequential:
+ arcpy.AddMessage("Non-Sequential Processing")
+ # Imports
+ import multiprocessing
+ from time import time, localtime, strftime, sleep, gmtime
+ arcpy.AddMessage("Start multiprocessing using the ArcGIS Pro pythonw.exe.")
+ # Set the multiprocessing executable explicitly in case we are running as an embedded process, i.e. inside ArcGIS Pro
+ multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
+ # Reserve two CPUs for other processes, and never spawn more workers than there are tables
+ _processes = min(multiprocessing.cpu_count() - 2, len(table_names))
+ arcpy.AddMessage(f"Creating the multiprocessing Pool with {_processes} processes")
+ # Create a pool of workers; let each worker process handle only 1 task before being restarted (in case of memory leaks)
+ with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
+ arcpy.AddMessage("\tPrepare arguments for processing")
+ # Use apply_async so we can handle exceptions gracefully
+ jobs={}
+ for i in range(0, len(table_names)):
+ try:
+ arcpy.AddMessage(f"Processing: {table_names[i]}")
+ table_name = table_names[i]
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ jobs[table_name] = pool.apply_async(worker, [region_gdb])
+ del table_name, region_gdb
+ except: # noqa: E722
+ pool.terminate()
+ traceback.print_exc()
+ sys.exit()
+ del i
+ all_finished = False
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ result_completed = {}
+ while True:
+ all_finished = True
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ arcpy.AddMessage(f"\nStart Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage("Have the workers finished?")
+
+ finish_time = strftime('%a %b %d %I:%M %p', localtime())
+ time_elapsed = u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
+ arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
+ finish_time = f"{finish_time}.\n\t{time_elapsed}"
+ del time_elapsed
+ for table_name, result in jobs.items():
+ if result.ready():
+ if table_name not in result_completed:
+ result_completed[table_name] = finish_time
+ try:
+ # wait for and get the result from the task
+ result.get()
+ except SystemExit:
+ pool.terminate()
+ traceback.print_exc()
+ sys.exit()
+ else:
+ pass
+ arcpy.AddMessage(f"Process {table_name}\n\tFinished on {result_completed[table_name]}")
+ else:
+ all_finished = False
+ arcpy.AddMessage(f"Process {table_name} is running. . .")
+ del table_name, result
+ del elapse_time, end_time, finish_time
+ if all_finished:
+ break
+ sleep(_processes * 7.5)
+ del result_completed
+ del start_time
+ del all_finished
+ arcpy.AddMessage("\tClose the process pool")
+ # close the process pool
+ pool.close()
+ # wait for all tasks to complete and processes to close
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
+ pool.join()
+ # Just in case
+ pool.terminate()
+ del pool
+ del jobs
+ del _processes
+ del time, multiprocessing, localtime, strftime, sleep, gmtime
+ arcpy.AddMessage("\tDone with multiprocessing Pool")
+
+ # Post-Processing
+ arcpy.AddMessage("Post-Processing Begins")
+ arcpy.AddMessage("Processing Results")
+ datasets = list()
+ walk = arcpy.da.Walk(scratch_folder, datatype=["Table", "FeatureClass"])
+ for dirpath, dirnames, filenames in walk:
+ for filename in filenames:
+ datasets.append(os.path.join(dirpath, filename))
+ del filename
+ del dirpath, dirnames, filenames
+ del walk
+ for dataset in datasets:
+ datasets_short_path = f".. {'/'.join(dataset.split(os.sep)[-4:])}"
+ dataset_name = os.path.basename(dataset)
+ region_gdb = os.path.dirname(dataset)
+ arcpy.AddMessage(f"\tDataset: '{dataset_name}'")
+ arcpy.AddMessage(f"\t\tPath: '{datasets_short_path}'")
+ arcpy.AddMessage(f"\t\tRegion GDB: '{os.path.basename(region_gdb)}'")
+ arcpy.management.Copy(dataset, rf"{project_gdb}\{dataset_name}")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ #arcpy.management.Delete(dataset)
+ #arcpy.AddMessage(f"\t\tAlter Fields for: '{dataset}'")
+ #dismap_tools.alter_fields(csv_data_folder, rf"{project_gdb}\{dataset}")
+ del region_gdb, dataset_name, datasets_short_path
+ del dataset
+ del datasets
+ arcpy.AddMessage(f"Compacting the {os.path.basename(project_gdb)} GDB")
+ arcpy.management.Compact(project_gdb)
+ arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+ # Declared Variables assigned in function
+ del scratch_folder, csv_data_folder
+ # Imports
+ del dismap_tools, worker, md
+ # Function Parameters
+ del project_gdb, Sequential, table_names
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def script_tool(project_gdb=""):
+ try:
+ # Imports
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ #arcpy.AddMessage(project_gdb)
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(project_gdb):
+ arcpy.AddError(f"{os.path.basename(project_gdb)} is missing!!")
+ sys.exit()
+ else:
+ pass
+
+ try:
+ # "AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW",
+ # "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",
+ Test = False
+ if Test:
+ director(project_gdb=project_gdb, Sequential=True, table_names=["GMEX_IDW"])
+ #director(project_gdb=project_gdb, Sequential=True, table_names=["NBS_IDW", "HI_IDW"])
+ elif not Test:
+ director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
+ else:
+ pass
+ del Test
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+
+ # Declared Variables
+ # Imports
+ # Function Parameters
+ del project_gdb
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(end_time-start_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+ script_tool(project_gdb)
+ arcpy.SetParameterAsText(1, "Result")
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
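Both `script_tool` functions report elapsed time with the same `divmod` arithmetic. Pulled out as a standalone function (the `format_elapsed` name is ours, not part of `dismap_tools`), the calculation is:

```python
def format_elapsed(start_time: float, end_time: float) -> str:
    """Format an elapsed interval as zero-padded H:M:S, as in script_tool."""
    hours, rem = divmod(end_time - start_time, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f}"


# 1 hour, 2 minutes, 5.5 seconds
print(format_elapsed(0.0, 3725.5))  # 01:02:05.50
```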
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_sample_locations_worker.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_sample_locations_worker.py
new file mode 100644
index 0000000..3ba78d4
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_region_sample_locations_worker.py
@@ -0,0 +1,650 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_region_sample_locations_worker.py
+# Purpose: Build extent points, fishnet, lat-long points, and latitude/longitude rasters for a region
+#
+# Author: john.f.kennedy
+#
+# Created: 29/02/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+import inspect
+
+import arcpy # third-parties second
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = sys.path[0] + os.sep + "test.py"
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
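The `trace()` helper above parses the active traceback to report the failing line and last error message. A pared-down, standalone sketch of the same technique (the division-by-zero trigger is only an illustration, not part of the dismap code):

```python
import sys
import traceback

def trace():
    # Grab the traceback of the exception currently being handled
    tb = sys.exc_info()[2]
    tbinfo = traceback.format_tb(tb)[0]   # e.g. '  File "x.py", line 12, in f\n    ...'
    line = tbinfo.split(", ")[1]          # -> 'line 12'
    # Last line of the formatted traceback is the error itself
    synerror = traceback.format_exc().splitlines()[-1]
    return line, synerror

try:
    1 / 0
except ZeroDivisionError:
    line, err = trace()
    print(line, err)  # e.g. "line 15 ZeroDivisionError: division by zero"
```

Note that `trace()` only works while an exception is being handled; outside an `except` block, `sys.exc_info()` returns `(None, None, None)`.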
+
+def worker(region_gdb=""):
+ try:
+ # Import the dismap_tools module to access tools
+ import dismap_tools
+ # Import
+ from arcpy import metadata as md
+ import pandas as pd
+ import numpy as np
+ import warnings
+
+ # Use all of the cores on the machine
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.env.overwriteOutput = True
+
+ # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic workspace variables
+ table_name = os.path.basename(region_gdb).replace(".gdb", "")
+ scratch_folder = os.path.dirname(region_gdb)
+ project_folder = os.path.dirname(scratch_folder)
+ csv_data_folder = rf"{project_folder}\CSV_Data"
+ process_table = rf"{csv_data_folder}\{table_name}.csv"
+ scratch_workspace = os.path.join(scratch_folder, "scratch.gdb")
+ del scratch_folder
+
+ # Set the workspace environments
+ arcpy.env.workspace = region_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+
+ field_csv_dtypes = dismap_tools.dTypesCSV(csv_data_folder, table_name)
+ field_gdb_dtypes = dismap_tools.dTypesGDB(csv_data_folder, table_name)
+
+ #print(field_csv_dtypes)
+ #print(field_gdb_dtypes)
+ #sys.exit()
+
+ # DatasetCode, CSVFile, TransformUnit, TableName, GeographicArea, CellSize,
+ # PointFeatureType, FeatureClassName, Region, Season, DateCode, Status,
+ # DistributionProjectCode, DistributionProjectName, SummaryProduct,
+ # FilterRegion, FilterSubRegion, FeatureServiceName, FeatureServiceTitle,
+ # MosaicName, MosaicTitle, ImageServiceName, ImageServiceTitle
+
+ # Get values for table_name from Datasets table
+ fields = ["TableName", "GeographicArea", "DatasetCode", "Region", "Season", "DistributionProjectCode"]
+ region_list = [row for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", fields, where_clause = f"TableName = '{table_name}'")][0]
+ del table_name
+
+ # Assigning variables from items in the chosen table list
+ # ['AI_IDW', 'AI_IDW_Region', 'AI', 'Aleutian Islands', None, 'IDW']
+ table_name = region_list[0]
+ geographic_area = region_list[1]
+ datasetcode = region_list[2]
+ region = region_list[3]
+ season = region_list[4]
+ distri_code = region_list[5]
+ del region_list
+
+ # Start of business logic for the worker function
+ arcpy.AddMessage(f"Reading {table_name} CSV File\n")
+
+ pd.set_option("display.max_columns", None)
+ pd.set_option("display.max_colwidth", 10)
+ pd.set_option("display.min_rows", 2)
+ pd.set_option("display.max_rows", 5)
+ pd.set_option("display.expand_frame_repr", False)
+
+ encoding, index_column = dismap_tools.get_encoding_index_col(process_table)
+
+ with warnings.catch_warnings():
+ warnings.simplefilter(action='ignore', category=FutureWarning)
+ # DataFrame
+ df = pd.read_csv(
+ process_table,
+ index_col = index_column,
+ encoding = encoding,
+ delimiter = ",",
+ dtype = field_csv_dtypes,
+ )
+ del encoding, index_column
+
+ # Rename columns using the dictionary below and the defined list of field names
+ # Easting,Northing,year,depth_m,median_est,mean_est,est5,est95,spp_sci,spp_common
+ # mean_est, est5, est95
+ column_names = {
+ "common" : "CommonName",
+ "depth" : "Depth",
+ "depth_m" : "Depth",
+ "DistributionProjectName" : "DistributionProjectName",
+ "est5" : "Estimate5",
+ "est95" : "Estimate95",
+ "haulid" : "SampleID",
+ "lat" : "Latitude",
+ "lat_UTM" : "Northing",
+ "lon" : "Longitude",
+ "lon_UTM" : "Easting",
+ "mean_est" : "MeanEstimate",
+ "median_est" : "MedianEstimate",
+ "region" : "Region",
+ "sampleid" : "SampleID",
+ "spp" : "Species",
+ "spp_common" : "CommonName",
+ "spp_sci" : "Species",
+ "stratum" : "Stratum",
+ "stratumarea" : "StratumArea",
+ "transformed" : "MapValue",
+ "wtcpue" : "WTCPUE",
+ "year" : "Year",
+ "CoreSpecies" : "CoreSpecies"
+ }
+
+ df.rename(columns=column_names, inplace=True)
+ del column_names
+
+ # Print column names
+ #for column in list(df.columns): arcpy.AddMessage(column); del column # print columns
+
+ # ###--->>>
+ arcpy.AddMessage("Inserting additional columns into the dataframe\n")
+
+ arcpy.AddMessage(f"\tInserting 'DatasetCode' column into: {table_name}")
+ df.insert(0, "DatasetCode", datasetcode)
+ del datasetcode
+
+ arcpy.AddMessage(f"\tInserting 'Region' column into: {table_name}")
+ if "Region" not in list(df.columns):
+ df.insert(df.columns.get_loc("DatasetCode")+1, "Region", f"{region}")
+
+ arcpy.AddMessage(f"\tInserting 'StdTime' column into: {table_name}")
+ if "StdTime" not in list(df.columns):
+ df.insert(df.columns.get_loc("Year")+1, "StdTime", pd.to_datetime(df["Year"], format="%Y").dt.tz_localize('Etc/GMT+12'))
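The `StdTime` insertion above parses the integer survey year into a timezone-aware timestamp pinned to the fixed UTC-12 zone. A minimal sketch of that pandas pattern, assuming pandas is available (the sample years are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"Year": ["2010", "2011", "2012"]})
# Parse the 4-digit year into a datetime, then attach a fixed zone.
# POSIX sign convention: 'Etc/GMT+12' means 12 hours *behind* UTC.
std_time = pd.to_datetime(df["Year"], format="%Y").dt.tz_localize("Etc/GMT+12")
print(std_time.iloc[0])  # 2010-01-01 00:00:00-12:00
```

The inverted sign in `Etc/GMT+12` is easy to misread; it is the reason the resulting offset prints as `-12:00`.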
+
+ arcpy.AddMessage(f"\tInserting 'MapValue' column into: {table_name}")
+ if "MapValue" not in list(df.columns):
+ df.insert(df.columns.get_loc("WTCPUE")+1, "MapValue", np.nan)
+ #-->> MapValue
+ arcpy.AddMessage("\tCalculating the MapValue values")
+ df["MapValue"] = df["WTCPUE"].pow((1.0/3.0))
+
+ arcpy.AddMessage(f"\tInserting 'SpeciesCommonName' column into: {table_name}")
+ if "SpeciesCommonName" not in list(df.columns):
+ df.insert(df.columns.get_loc("CommonName")+1, "SpeciesCommonName", "")
+
+ arcpy.AddMessage(f"\tInserting 'CommonNameSpecies' column into: {table_name}")
+ if "CommonNameSpecies" not in list(df.columns):
+ df.insert(df.columns.get_loc("SpeciesCommonName")+1, "CommonNameSpecies", "")
+
+ # Test if 'IDW' in table name
+ #if "IDW" in table_name:
+ arcpy.AddMessage(f"\tInserting 'Season' {season} column into: {table_name}")
+ if "Season" not in list(df.columns):
+ df.insert(df.columns.get_loc("Region")+1, "Season", season if season is not None else "")
+
+ arcpy.AddMessage(f"\tInserting 'SummaryProduct' column into: {table_name}")
+ if "SummaryProduct" not in list(df.columns):
+ df.insert(df.columns.get_loc("Season")+1, "SummaryProduct", "Yes")
+
+ arcpy.AddMessage(f"\tInserting 'TransformUnit' column into: {table_name}")
+ if "TransformUnit" not in list(df.columns):
+ df.insert(df.columns.get_loc("MapValue")+1, "TransformUnit", "cuberoot")
+
+ arcpy.AddMessage(f"\tInserting 'CoreSpecies' column into: {table_name}")
+ if "CoreSpecies" not in list(df.columns):
+ df.insert(df.columns.get_loc("CommonNameSpecies")+1, "CoreSpecies", "No")
+
+ arcpy.AddMessage(f"\tCalculate Null for 'StratumArea' column in: {table_name}")
+ if "StratumArea" in list(df.columns):
+ #df["StratumArea"].fillna(np.nan, inplace = True)
+ df["StratumArea"] = df["StratumArea"].fillna(np.nan)
+
+ arcpy.AddMessage(f"\tCalculate Null for 'DistributionProjectName' column in: {table_name}")
+ if "DistributionProjectName" in list(df.columns):
+ #df["DistributionProjectName"].fillna(np.nan, inplace = True)
+ df["DistributionProjectName"] = df["DistributionProjectName"].fillna(np.nan)
+
+ arcpy.AddMessage(f"\tCalculate Null for 'WTCPUE' column in: {table_name}")
+ if "WTCPUE" in list(df.columns):
+ df["WTCPUE"] = df["WTCPUE"].fillna(np.nan)
+
+ arcpy.AddMessage(f"\tCalculate Null for 'Latitude' column in: {table_name}")
+ if "Latitude" in list(df.columns):
+ df["Latitude"] = df["Latitude"].fillna(np.nan)
+
+ arcpy.AddMessage(f"\tCalculate Null for 'Longitude' column in: {table_name}")
+ if "Longitude" in list(df.columns):
+ df["Longitude"] = df["Longitude"].fillna(np.nan)
+
+ arcpy.AddMessage(f"\tCalculate Null for 'Depth' column in: {table_name}")
+ if "Depth" in list(df.columns):
+ df["Depth"] = df["Depth"].fillna(np.nan)
+
+ del region, season
+
+ # ###--->>>
+ #arcpy.AddMessage(f"Updating and calculating new values for some columns\n")
+ #-->> DistributionProjectName
+ arcpy.AddMessage("\tSetting 'NaN' in 'DistributionProjectName' to ''")
+ #df.loc[df['DistributionProjectName'] == 'nan', 'DistributionProjectName'] = ""
+ df["DistributionProjectName"] = df["DistributionProjectName"].fillna("")
+
+ #-->> CommonName
+ arcpy.AddMessage("\tSetting 'NaN' in 'CommonName' to ''")
+ #df.loc[df['CommonName'] == 'nan', 'CommonName'] = ""
+ df["CommonName"] = df["CommonName"].fillna("")
+
+ arcpy.AddMessage("\tCasting 'CommonName' to unicode")
+ # Cast text as Unicode in the CommonName field
+ df["CommonName"] = df["CommonName"].astype("unicode")
+
+ #-->> SpeciesCommonName
+ arcpy.AddMessage("\tCalculating SpeciesCommonName and setting it to 'Species (CommonName)'")
+ df["SpeciesCommonName"] = np.where(df["CommonName"] != "", df["Species"] + ' (' + df["CommonName"] + ')', "")
+
+ #-->> CommonNameSpecies
+ arcpy.AddMessage("\tCalculating CommonNameSpecies and setting it to 'CommonName (Species)'")
+ df["CommonNameSpecies"] = np.where(df["CommonName"] != "", df["CommonName"] + ' (' + df["Species"] + ')', "")
+
+ arcpy.AddMessage("\tReplacing Infinity values with Nulls")
+ # Replace Inf with Nulls
+ # For some cell values in the 'WTCPUE' column, there is an Inf
+ # value representing an infinite result
+ df.replace([np.inf, -np.inf], np.nan, inplace=True)
+
+ # Left justify the column names
+ #df.columns = pd.Index([col.ljust(10) for col in df.columns])
+
+ table_definition = dismap_tools.table_definitions(csv_data_folder, table_name)
+ #arcpy.AddMessage(table_definition)
+
+ # altering the DataFrame
+ df = df[table_definition]
+ del table_definition
+
+ #raise SystemExit(f"Line Number: {traceback.extract_stack()[-1].lineno}")
+
+ pd.set_option("display.max_colwidth", 12)
+
+ # Change Table Style
+ df.style.set_table_styles([{'selector': 'td', 'props': 'white-space: nowrap !important;'}])
+
+ arcpy.AddMessage(f"\nDataframe report:\n{df.head(5)}\n")
+
+ arcpy.AddMessage("Converting the Dataframe to a NumPy array\n")
+ try:
+ array = np.array(np.rec.fromrecords(df.values), dtype = field_gdb_dtypes)
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
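The conversion above goes DataFrame values → record array → typed structured array so `arcpy.da.NumPyArrayToTable` can write it with the exact geodatabase field schema. A NumPy-only sketch of that step (the rows and dtype here are illustrative stand-ins, not the real dismap fields):

```python
import numpy as np

# Illustrative rows and a GDB-style dtype (hypothetical fields)
rows = [("AI", 2010, 1.25), ("AI", 2011, 0.75)]
gdb_dtypes = np.dtype([("Region", "U10"), ("Year", "i4"), ("MapValue", "f8")])

# np.rec.fromrecords infers a record array from the tuples; np.array then
# casts it, field by position, to the names/types the target table expects
array = np.array(np.rec.fromrecords(rows), dtype=gdb_dtypes)
print(array["Year"].tolist())  # [2010, 2011]
```

One caveat worth knowing: any NaN in an object-typed DataFrame column can make this cast fail, which is why the script normalizes nulls before converting.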
+
+ del field_gdb_dtypes
+ del field_csv_dtypes
+
+ del df # delete dataframe
+ # Imports
+ del pd, np
+
+ # Temporary table
+ #tmp_table = rf"memory\{table_name.lower()}_tmp"
+ tmp_table = rf"{region_gdb}\{table_name.lower()}_tmp"
+ try:
+ arcpy.da.NumPyArrayToTable(array, tmp_table)
+ del array
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+
+ desc = arcpy.da.Describe(tmp_table)
+ fields = [f.name for f in desc["fields"] if f.type == "String"]
+ #fields = ["Season", "Species", "CommonName", "SpeciesCommonName", "CommonNameSpecies", "Stratum"]
+ oid = desc["OIDFieldName"]
+ # Preview the first few rows of the string fields
+ arcpy.AddMessage(f"{', '.join(fields)}")
+ for row in arcpy.da.SearchCursor(tmp_table, fields, f"{oid} <= 5"):
+ arcpy.AddMessage(row)
+ del row
+ del desc, fields, oid
+
+ out_table = rf"{region_gdb}\{table_name}"
+ #out_table = rf"{region_gdb}\{table_name}_TABLE"
+
+ arcpy.AddMessage(f"Copying the {table_name} Table from the temporary table to the GDB")
+ arcpy.management.CopyRows(tmp_table, out_table, "")
+ arcpy.AddMessage("Copy Rows: \t{0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # Remove the temporary table
+ arcpy.management.Delete(tmp_table)
+ del tmp_table
+
+ process_table_md = md.Metadata(process_table)
+ out_table_md = md.Metadata(out_table)
+ out_table_md.copy(process_table_md)
+ out_table_md.save()
+ out_table_md.synchronize("OVERWRITE")
+ out_table_md.save()
+ out_table_md.synchronize("ALWAYS")
+ out_table_md.save()
+ del out_table_md
+ del process_table_md
+
+ del process_table # delete passed variables
+
+ # Test if 'IDW' in region name
+ #if distri_code == "IDW":
+ # # Calculate Core Species
+ # dismap_tools.calculate_core_species(out_table)
+
+ #arcpy.conversion.ExportTable(in_table = out_table, out_table = f"{csv_data_folder}\_{table_name}.csv", where_clause="", use_field_alias_as_name = "NOT_USE_ALIAS")
+ #arcpy.AddMessage("Export Table: \t{0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.AddMessage(f"Creating the {table_name} Sample Locations Dataset")
+
+ # Set the output coordinate system to what is needed for the
+ # DisMAP project
+ #geographic_area_sr = os.path.join(f"{project_folder}", "Dataset_Shapefiles", f"{table_name}", f"{geographic_area}.prj")
+ #psr = arcpy.SpatialReference(geographic_area_sr); del geographic_area_sr
+ #arcpy.env.outputCoordinateSystem = psr
+ psr = arcpy.Describe(rf"{region_gdb}\{table_name}_Region").spatialReference
+
+ arcpy.env.outputCoordinateSystem = psr
+
+ out_features = ""
+
+ #if distri_code == "IDW":
+
+ # 4326 - World Geodetic System 1984 (WGS 84)
+ gsr = arcpy.SpatialReference(4326)
+ gsr_wkt = gsr.exportToString()
+ psr_wkt = psr.exportToString()
+ transformation = dismap_tools.get_transformation(gsr_wkt, psr_wkt)
+ arcpy.env.geographicTransformations = transformation
+ del gsr_wkt, psr_wkt
+ del transformation
+
+ arcpy.AddMessage("\tMake XY Event layer for IDW datasets")
+ # Set the output coordinate system to what is needed for the
+ # DisMAP project
+ #gsr = "GEOGCS['GCS_WGS_1984',DATUM['D_WGS_1984',SPHEROID['WGS_1984',6378137.0,298.257223563]],PRIMEM['Greenwich',0.0],UNIT['Degree',0.0174532925199433]]"
+ x_coord, y_coord = 'Longitude', 'Latitude'
+ xy_events = arcpy.management.MakeXYEventLayer(out_table, x_coord, y_coord, "xy_events", gsr, "#")
+ del x_coord, y_coord
+
+ out_features = rf"{region_gdb}\{table_name}_Sample_Locations"
+
+ with arcpy.EnvManager(scratchWorkspace = scratch_workspace, workspace = region_gdb):
+ arcpy.conversion.ExportFeatures(in_features = xy_events,
+ out_features = out_features,
+ where_clause = "",
+ use_field_alias_as_name = "",
+ field_mapping = "",
+ sort_field = "")
+ # Clear the XY Event Layer from memory.
+ arcpy.management.Delete("xy_events")
+ del xy_events, psr
+
+ #arcpy.AddMessage(f"\tXY TableToNumPyArray to Feature Class")
+ #fields = [f.name for f in arcpy.ListFields(out_table) if f.type not in ["Geometry", "OID"]]
+ #arr = arcpy.da.TableToNumPyArray(out_table, fields)
+ #arcpy.da.NumPyArrayToFeatureClass(arr, out_features, ('Longitude', 'Latitude'), gsr)
+ #del arr, fields
+
+ del gsr
+
+ #elif distri_code != "IDW":
+ #
+ # x_field, y_field = 'Easting', 'Northing'
+ # out_features = rf"{region_gdb}\{table_name}_GRID_Points"
+ #
+ # arcpy.AddMessage(f"\tXY Table to Feature Class")
+ # arcpy.management.XYTableToPoint(out_table, out_features, x_field, y_field, "#", psr)
+ # del x_field, y_field
+
+ del geographic_area
+ del distri_code
+
+ if arcpy.Exists(out_features):
+ arcpy.AddMessage(f"Adding field index in the {table_name} Point Locations Dataset")
+
+ # Add Attribute Index
+ arcpy.management.AddIndex(out_features, ['Species', 'CommonName', 'SpeciesCommonName', 'Year'], f"{table_name}_SampleLocationsSpeciesIndex", "NON_UNIQUE", "NON_ASCENDING")
+
+ # Get the count of records for selected species
+ getcount = arcpy.management.GetCount(out_features)[0]
+ arcpy.AddMessage(f"\t{os.path.basename(out_features)} has {getcount} records")
+ del getcount
+ else:
+ pass
+
+ out_table_md = md.Metadata(out_table)
+ out_features_md = md.Metadata(out_features)
+ out_features_md.copy(out_table_md)
+ out_features_md.save()
+ out_features_md.synchronize("OVERWRITE")
+ out_features_md.save()
+ out_features_md.synchronize("ALWAYS")
+ out_features_md.save()
+ del out_features_md
+ del out_table_md
+
+ arcpy.AddMessage(f"\t\tAlter Fields for: '{os.path.basename(out_features)}'")
+ dismap_tools.alter_fields(csv_data_folder, out_features)
+
+ dataset_md = md.Metadata(out_features)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ del dataset_md
+
+ arcpy.AddMessage(f"\t\tAlter Fields for: '{os.path.basename(out_table)}'")
+ dismap_tools.alter_fields(csv_data_folder, out_table)
+
+ dataset_md = md.Metadata(out_table)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ del dataset_md
+
+ arcpy.management.Delete(rf"{region_gdb}\{table_name}_Boundary")
+ arcpy.management.Delete(rf"{region_gdb}\{table_name}_Region")
+ arcpy.management.Delete(rf"{region_gdb}\Datasets")
+
+ # Declared Variables
+ del out_table, out_features
+ del table_name, project_folder, scratch_workspace, csv_data_folder
+ # Imports
+ del warnings, md
+ del dismap_tools
+ # Function parameter
+ del region_gdb
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit as se:
+ arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ sys.exit()
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+def script_tool(project_gdb=""):
+ try:
+ import dismap_tools
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ # Imports
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+ del project_folder
+
+ # Clear Scratch Folder
+ ClearScratchFolder = True
+ if ClearScratchFolder:
+ dismap_tools.clear_folder(folder=scratch_folder)
+ else:
+ pass
+ del ClearScratchFolder
+
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(project_gdb):
+ sys.exit(f"{os.path.basename(project_gdb)} is missing!!")
+
+ # Create project scratch workspace, if missing
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ if not arcpy.Exists(scratch_folder):
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", "scratch")
+
+ # Set worker parameters
+ table_name = "AI_IDW"
+ #table_name = "HI_IDW"
+ #table_name = "NBS_IDW"
+ #table_name = "SEUS_SPR_IDW"
+ #table_name = "GMEX_IDW"
+ #table_name = "ENBS_IDW"
+
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+
+ # Create worker scratch workspace, if missing
+ if not arcpy.Exists(scratch_workspace):
+ os.makedirs(os.path.join(scratch_folder, table_name))
+ if not arcpy.Exists(scratch_workspace):
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, f"{table_name}"), "scratch")
+ del scratch_workspace
+
+ # Setup worker workspace and copy data
+ #datasets = [os.path.join(project_gdb, "Datasets"), os.path.join(project_gdb, f"{table_name}_Region")]
+ #if not any(arcpy.management.GetCount(d)[0] == 0 for d in datasets):
+ if not arcpy.Exists(os.path.join(scratch_folder, f"{table_name}.gdb")):
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ else:
+ pass
+ arcpy.management.Copy(os.path.join(project_gdb, "Datasets"), rf"{region_gdb}\Datasets")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.management.Copy(os.path.join(project_gdb, f"{table_name}_Region"), rf"{region_gdb}\{table_name}_Region")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ #else:
+ # arcpy.AddWarning(f"One or more datasets contains zero records!!")
+ # for d in datasets:
+ # arcpy.AddMessage(f"\t{os.path.basename(d)} has {arcpy.management.GetCount(d)[0]} records")
+ # del d
+ # se = f"SystemExit at line number: '{traceback.extract_stack()[-1].lineno}'"
+ # sys.exit(se)
+ #if "datasets" in locals().keys(): del datasets
+
+ try:
+ worker(region_gdb=region_gdb)
+ except SystemExit:
+ arcpy.AddMessage(arcpy.GetMessages())
+ sys.exit()
+ #arcpy.AddMessage(f"caught SystemExit in '{inspect.stack()[0][3]}'")
+
+
+ # Declared Variables
+ del region_gdb, table_name, scratch_folder
+ # Imports
+ del dismap_tools
+ # Function Parameters
+ del project_gdb
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(elapse_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
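The elapsed-time report above splits a seconds count into hours, minutes, and fractional seconds with two `divmod` calls. A stdlib-only sketch of the same formatting (the `sleep` call is a stand-in for the real work):

```python
from time import time, sleep

start_time = time()
sleep(0.1)  # stand-in for the real work
elapsed = time() - start_time

hours, rem = divmod(elapsed, 3600)   # whole hours, leftover seconds
minutes, seconds = divmod(rem, 60)   # whole minutes, fractional seconds
stamp = f"{int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f}"
print(f"Elapsed Time {stamp} (H:M:S)")
```

For example, 3725.5 elapsed seconds formats as `01:02:05.50`.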
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+ script_tool(project_gdb)
+ arcpy.SetParameterAsText(1, "Result")
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_regions_from_shapefiles_director.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_regions_from_shapefiles_director.py
new file mode 100644
index 0000000..4acc894
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_regions_from_shapefiles_director.py
@@ -0,0 +1,461 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_regions_from_shapefiles_director.py
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 03/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+import inspect
+
+import arcpy # third-parties second
+
+def create_dismap_regions(project_gdb=""):
+ try:
+ import dismap_tools
+
+ project_folder = os.path.dirname(project_gdb)
+ csv_data_folder = os.path.join(project_folder, "CSV_Data")
+
+ arcpy.env.overwriteOutput = True
+
+ if arcpy.Exists(os.path.join(project_gdb, "DisMAP_Regions")):
+ arcpy.management.Delete(os.path.join(project_gdb, "DisMAP_Regions"))
+
+ arcpy.AddMessage("Creating: 'DisMAP_Regions'")
+ # Execute Tool
+ # Spatial Reference factory code of 4326 is : GCS_WGS_1984
+ # Spatial Reference factory code of 5714 is : Mean Sea Level (Height)
+ # sr = arcpy.SpatialReference(4326, 5714)
+ sp_ref = arcpy.SpatialReference('WGS_1984_Web_Mercator_Auxiliary_Sphere')
+ arcpy.management.CreateFeatureclass(
+ out_path = project_gdb,
+ out_name = "DisMAP_Regions",
+ geometry_type = "POLYLINE",
+ template = "",
+ has_m = "DISABLED",
+ has_z = "DISABLED",
+ spatial_reference = sp_ref,
+ config_keyword = "",
+ spatial_grid_1 = "0",
+ spatial_grid_2 = "0",
+ spatial_grid_3 = "0"
+ )
+ arcpy.AddMessage("\tCreate Featureclass: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del sp_ref
+ dismap_tools.add_fields(csv_data_folder, os.path.join(project_gdb, "DisMAP_Regions"))
+ dismap_tools.import_metadata(csv_data_folder, os.path.join(project_gdb, "DisMAP_Regions"))
+
+ # Declared Variables
+ del project_folder, csv_data_folder
+ # Imports
+ del dismap_tools
+ # Function Parameter
+ del project_gdb
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit as se:
+ arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ sys.exit()
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ arcpy.management.ClearWorkspaceCache()
+
+def director(project_gdb="", Sequential=True, table_names=[]):
+ try:
+ # Imports
+ from arcpy import metadata as md
+
+ # Imports
+ import dismap_tools
+ from create_regions_from_shapefiles_worker import worker
+
+ arcpy.env.overwriteOutput = True
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+ csv_data_folder = rf"{project_folder}\CSV_Data"
+
+ # Clear Scratch Folder
+ dismap_tools.clear_folder(folder=scratch_folder)
+
+ # Create Scratch Workspace for Project
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ if not arcpy.Exists(scratch_folder):
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", "scratch")
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ del project_folder, scratch_workspace
+
+ create_dismap_regions(project_gdb)
+
+ if not table_names:
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"),
+ "TableName",
+ where_clause = "TableName LIKE '%_IDW'")]
+ else:
+ pass
+
+ # Pre Processing
+ for table_name in table_names:
+ arcpy.AddMessage(f"Pre-Processing: {table_name}")
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ region_scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+ # Create Scratch Workspace for Region
+ if not arcpy.Exists(region_scratch_workspace):
+ os.makedirs(os.path.join(scratch_folder, table_name))
+ if not arcpy.Exists(region_scratch_workspace):
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, f"{table_name}"), "scratch")
+ del region_scratch_workspace
+
+ #datasets = [os.path.join(project_gdb, "Datasets")]
+ #if not any(arcpy.management.GetCount(d)[0] == 0 for d in datasets):
+ if not arcpy.Exists(os.path.join(scratch_folder, f"{table_name}.gdb")):
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ else:
+ pass
+
+ arcpy.management.Copy(os.path.join(project_gdb, "Datasets"), rf"{region_gdb}\Datasets")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.management.CreateFeatureclass(rf"{region_gdb}", "DisMAP_Regions", "POLYLINE", os.path.join(project_gdb, "DisMAP_Regions"))
+ arcpy.AddMessage("\tCreate Feature Class: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ dismap_regions_md = md.Metadata(os.path.join(project_gdb, "DisMAP_Regions"))
+ dataset_md = md.Metadata(rf"{region_gdb}\DisMAP_Regions")
+ dataset_md.copy(dismap_regions_md)
+ dataset_md.save()
+ dataset_md.synchronize("OVERWRITE")
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ del dataset_md, dismap_regions_md
+ #else:
+ # arcpy.AddWarning(f"One or more datasets contains zero records!!")
+ # for d in datasets:
+ # arcpy.AddMessage(f"\t{os.path.basename(d)} has {arcpy.management.GetCount(d)[0]} records")
+ # del d
+ # se = f"SystemExit at line number: '{traceback.extract_stack()[-1].lineno}'"
+ # sys.exit()(se)
+ #if "datasets" in locals().keys(): del datasets
+ del region_gdb
+ del table_name
+
+ # Sequential Processing
+ if Sequential:
+ arcpy.AddMessage("Sequential Processing")
+ for i in range(0, len(table_names)):
+ arcpy.AddMessage(f"Processing: {table_names[i]}")
+ table_name = table_names[i]
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ try:
+ worker(region_gdb=region_gdb)
+ except SystemExit:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ del region_gdb, table_name
+ del i
+ else:
+ pass
+
+ # Non-Sequential Processing
+ if not Sequential:
+ arcpy.AddMessage("Non-Sequential Processing")
+ # Imports
+ import multiprocessing
+ from time import time, localtime, strftime, sleep, gmtime
+ arcpy.AddMessage("Start multiprocessing using the ArcGIS Pro pythonw.exe.")
+ #Set the multiprocessing executable in case we're running as an embedded process, i.e. inside ArcGIS Pro
+ #(pythonw.exe is resolved from sys.exec_prefix, the active conda environment)
+ multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
+ # Get the CPU count, then reserve two cores for other processes
+ _processes = multiprocessing.cpu_count() - 2
+ _processes = min(_processes, len(table_names))
+ arcpy.AddMessage(f"Creating the multiprocessing Pool with {_processes} processes")
+ # Create a pool of workers, keeping two CPUs free for other work.
+ # Let each worker process handle only 1 task before being restarted (in case of nasty memory leaks)
+ with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
+ arcpy.AddMessage("\tPrepare arguments for processing")
+ # Use apply_async so we can handle exceptions gracefully
+ jobs={}
+ for i in range(0, len(table_names)):
+ try:
+ arcpy.AddMessage(f"Processing: {table_names[i]}")
+ table_name = table_names[i]
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ jobs[table_name] = pool.apply_async(worker, [region_gdb])
+ del table_name, region_gdb
+ except: # noqa: E722
+ pool.terminate()
+ traceback.print_exc()
+ sys.exit()
+ del i
+ all_finished = False
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ result_completed = {}
+ while True:
+ all_finished = True
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ arcpy.AddMessage(f"\nStart Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage("Have the workers finished?")
+ finish_time = strftime('%a %b %d %I:%M %p', localtime())
+ time_elapsed = u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
+ arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
+ finish_time = f"{finish_time}.\n\t{time_elapsed}"
+ del time_elapsed
+ for table_name, result in jobs.items():
+ if result.ready():
+ if table_name not in result_completed:
+ result_completed[table_name] = finish_time
+ try:
+ # wait for and get the result from the task
+ result.get()
+ except SystemExit:
+ pool.terminate()
+ traceback.print_exc()
+ sys.exit()
+ else:
+ pass
+ arcpy.AddMessage(f"Process {table_name}\n\tFinished on {result_completed[table_name]}")
+ else:
+ all_finished = False
+ arcpy.AddMessage(f"Process {table_name} is running. . .")
+ del table_name, result
+ del elapse_time, end_time, finish_time
+ if all_finished:
+ break
+ sleep(_processes * 7.5)
+ del result_completed
+ del start_time
+ del all_finished
+ arcpy.AddMessage("\tClose the process pool")
+ # close the process pool
+ pool.close()
+ # wait for all tasks to complete and processes to close
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
+ pool.join()
+ # Just in case
+ pool.terminate()
+ del pool
+ del jobs
+ del _processes
+ del time, multiprocessing, localtime, strftime, sleep, gmtime
+ arcpy.AddMessage("\tDone with multiprocessing Pool")
+
+ # Post-Processing
+ arcpy.AddMessage("Post-Processing Begins")
+ arcpy.AddMessage("Processing Results")
+ datasets = list()
+ walk = arcpy.da.Walk(scratch_folder, datatype="FeatureClass", type=["Polyline", "Polygon"])
+ for dirpath, dirnames, filenames in walk:
+ for filename in filenames:
+ datasets.append(os.path.join(dirpath, filename))
+ del filename
+ del dirpath, dirnames, filenames
+ del walk
+ for dataset in datasets:
+ #print(dataset)
+ datasets_short_path = f".. {'/'.join(dataset.split(os.sep)[-4:])}"
+ dataset_name = os.path.basename(dataset)
+ region_gdb = os.path.dirname(dataset)
+ arcpy.AddMessage(f"\tDataset: '{dataset_name}'")
+ arcpy.AddMessage(f"\t\tPath: '{datasets_short_path}'")
+ arcpy.AddMessage(f"\t\tRegion GDB: '{os.path.basename(region_gdb)}'")
+ arcpy.management.Copy(dataset, rf"{project_gdb}\{dataset_name}")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ if dataset_name.endswith("_Boundary"):
+ arcpy.AddMessage(f"\tAppending the {dataset_name} Dataset to the DisMAP Regions Dataset")
+ # Process: Append
+ arcpy.management.Append(inputs = rf"{project_gdb}\{dataset_name}",
+ target = os.path.join(project_gdb, "DisMAP_Regions"),
+ schema_type = "NO_TEST",
+ field_mapping = "",
+ subtype = "")
+ arcpy.AddMessage("\tAppend: {0} {1}\n".format(os.path.basename(dataset), arcpy.GetMessages(0).replace("\n", '\n\t')))
+ else:
+ pass
+ #arcpy.AddMessage(f"\t\tAlter Fields for: '{dataset_name}'")
+ #dismap_tools.alter_fields(csv_data_folder, rf"{project_gdb}\{dataset_name}")
+ #dismap_tools.import_metadata(dataset=rf"{project_gdb}\{dataset_name}")
+ del region_gdb, dataset_name, datasets_short_path
+ del dataset
+ del datasets
+ arcpy.AddMessage(f"Compacting the {os.path.basename(project_gdb)} GDB")
+ arcpy.management.Compact(project_gdb)
+ arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+ # Declared Variables
+ del scratch_folder, csv_data_folder
+ # Imports
+ del dismap_tools, worker, md
+ # Function Parameters
+ del project_gdb, Sequential, table_names
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit as se:
+ arcpy.AddError(f"Caught a SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ sys.exit()
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an unhandled exception in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+def script_tool(project_gdb=""):
+ try:
+ # Imports
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ try:
+ # "AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW",
+ # "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",
+ test = False
+ if test:
+ #director(project_gdb=project_gdb, Sequential=True, table_names=["HI_IDW"])
+ director(project_gdb=project_gdb, Sequential=True, table_names=["SEUS_SPR_IDW", "HI_IDW"])
+ #create_dismap_regions(project_gdb)
+ elif not test:
+ director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
+ del test
+ except: # noqa: E722
+ traceback.print_exc()
+ sys.exit()
+
+ # Declared Variables
+ del project_gdb
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(end_time-start_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit as se:
+ arcpy.AddError(f"Caught a SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ sys.exit()
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an unhandled exception in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+ script_tool(project_gdb)
+ arcpy.SetParameterAsText(1, "Result")
+ del project_gdb
+ except: # noqa: E722
+ traceback.print_exc()
+ else:
+ pass
+ finally:
+ pass
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_regions_from_shapefiles_worker.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_regions_from_shapefiles_worker.py
new file mode 100644
index 0000000..61b15d5
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_regions_from_shapefiles_worker.py
@@ -0,0 +1,408 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_regions_from_shapefiles_worker
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 03/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+import inspect
+
+import arcpy # third-parties second
+
+def worker(region_gdb=""):
+ try:
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(rf"{region_gdb}"):
+ sys.exit(f"{os.path.basename(region_gdb)} is missing!!")
+
+ # Import the dismap_tools module to access tools
+ import dismap_tools
+ # Imports
+ from arcpy import metadata as md
+
+ # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic workspace variables
+ table_name = os.path.basename(region_gdb).replace(".gdb","")
+ scratch_folder = os.path.dirname(region_gdb)
+ project_folder = os.path.dirname(scratch_folder)
+ csv_data_folder = rf"{project_folder}\CSV_Data"
+ scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+
+ arcpy.AddMessage(f"Table Name: {table_name}\nProject Folder: {os.path.basename(project_folder)}\nScratch Folder: {os.path.basename(scratch_folder)}\n")
+
+ del scratch_folder
+
+ # Set basic workspace variables
+ arcpy.env.workspace = region_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ del scratch_workspace
+
+ # DatasetCode CSVFile TransformUnit TableName GeographicArea CellSize
+ # PointFeatureType FeatureClassName Region Season DateCode
+ # Status DistributionProjectCode DistributionProjectName
+ # SummaryProduct FilterRegion FilterSubRegion FeatureServiceName
+ # FeatureServiceTitle MosaicName MosaicTitle ImageServiceName, ImageServiceTitle
+
+ fields = ["TableName", "GeographicArea", "DatasetCode", "Region", "Season", "DistributionProjectCode"]
+ region_list = [row for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", fields, where_clause = f"TableName = '{table_name}'")][0]
+ del fields
+
+ # Assigning variables from items in the chosen table list
+ # ['AI_IDW', 'AI_IDW_Region', 'AI', 'Aleutian Islands', None, 'IDW']
+ table_name = region_list[0]
+ geographic_area = region_list[1]
+ datasetcode = region_list[2]
+ region = region_list[3]
+ season = region_list[4] if region_list[4] else ""
+ distri_code = region_list[5]
+
+ del region_list
+
+ arcpy.AddMessage(f"\tTable Name: {table_name}")
+ arcpy.AddMessage(f"\tGeographic Area: {geographic_area}")
+ arcpy.AddMessage(f"\tDataset Code: {datasetcode}")
+ arcpy.AddMessage(f"\tRegion: {region}")
+ arcpy.AddMessage(f"\tSeason: {season}")
+ arcpy.AddMessage(f"\tDistri Code: {distri_code}")
+
+ geographicarea_sr = os.path.join(project_folder, f"Dataset_Shapefiles\\{table_name}\\{geographic_area}.prj")
+ arcpy.AddMessage(geographicarea_sr)
+ datasetcode_sr = arcpy.SpatialReference(geographicarea_sr)
+ del geographicarea_sr
+
+ if datasetcode_sr.linearUnitName == "Kilometer":
+ arcpy.env.cellSize = 1
+ arcpy.env.XYResolution = 0.1
+ arcpy.env.XYTolerance = 1.0
+ elif datasetcode_sr.linearUnitName == "Meter":
+ arcpy.env.cellSize = 1000
+ arcpy.env.XYResolution = 0.0001
+ arcpy.env.XYTolerance = 0.001
+
+ arcpy.AddMessage(f"\t\tCreating Feature Class: {geographic_area}")
+ # Execute Create Feature Class
+ # Use DisMAP regions as a template
+ geographic_area_path = arcpy.management.CreateFeatureclass(
+ out_path = region_gdb,
+ out_name = f"{geographic_area}",
+ geometry_type = "POLYGON",
+ template = rf"{region_gdb}\DisMAP_Regions",
+ has_m = "DISABLED",
+ has_z = "DISABLED",
+ spatial_reference = datasetcode_sr,
+ config_keyword = "",
+ spatial_grid_1 = "0",
+ spatial_grid_2 = "0",
+ spatial_grid_3 = "0"
+ )
+ arcpy.AddMessage("\t\t\t{0}\n".format(arcpy.GetMessages().replace("\n", '\n\t\t\t')))
+ del datasetcode_sr
+ del geographic_area_path
+
+ geographic_area_path = rf"{region_gdb}\{geographic_area}"
+
+ # The shapefile used to create the extent and mask for the environment variable
+ geographic_area_shape_file = rf"{project_folder}\Dataset_Shapefiles\{table_name}\{geographic_area}.shp"
+ geographic_area_boundary = f"{geographic_area.replace('_Region','_Boundary')}"
+ geographic_area_boundary_path = rf"{region_gdb}\{geographic_area.replace('_Region','_Boundary')}"
+
+ dismap_regions_md = md.Metadata(rf"{region_gdb}\DisMAP_Regions")
+ dataset_md = md.Metadata(geographic_area_path)
+ dataset_md.copy(dismap_regions_md)
+ dataset_md.save()
+ dataset_md.synchronize("OVERWRITE")
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ del dataset_md, dismap_regions_md
+
+ arcpy.AddMessage(f'\t\tCopy {geographic_area} Shape File.')
+
+ arcpy.AddMessage(f"\t\tAppend: {geographic_area}")
+ arcpy.management.Append(inputs = geographic_area_shape_file, target = geographic_area_path, schema_type = "NO_TEST")
+ arcpy.AddMessage("\t\t\t{0}\n".format(arcpy.GetMessages().replace("\n", '\n\t\t\t')))
+
+ arcpy.AddMessage(f"\t\tCalculate Fields for: {geographic_area}")
+ arcpy.management.CalculateFields(geographic_area_path, "PYTHON3",
+ [
+ ["DatasetCode", f'"{datasetcode}"'],
+ ["Region", f'"{region}"'],
+ ["Season", f'"{season}"'],
+ ["DistributionProjectCode", f'"{distri_code}"'],
+ ],
+ )
+ arcpy.AddMessage("\t\t\tCalculate Fields: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t\t\t')))
+
+ arcpy.AddMessage(f"\t\tFeature to Line to create: {geographic_area_boundary}")
+ arcpy.management.FeatureToLine(in_features = geographic_area_path, out_feature_class = geographic_area_boundary_path, cluster_tolerance="", attributes="ATTRIBUTES")
+ arcpy.AddMessage("\t\t\t{0}\n".format(arcpy.GetMessages().replace("\n", '\n\t\t\t')))
+
+ arcpy.AddMessage(f"\t\tDeleting fields from table: {geographic_area_boundary}")
+ arcpy.management.DeleteField(in_table = rf"{region_gdb}\{geographic_area_boundary}", drop_field = [f"FID_{geographic_area}"])
+ arcpy.AddMessage("\t\t\t{0}\n".format(arcpy.GetMessages().replace("\n", '\n\t\t\t')))
+
+
+ arcpy.AddMessage(f"\t\tAlter Fields for: '{os.path.basename(geographic_area_path)}'")
+ dismap_tools.alter_fields(csv_data_folder, geographic_area_path)
+
+ dataset_md = md.Metadata(geographic_area_path)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ del dataset_md
+
+ arcpy.AddMessage(f"\t\tAlter Fields for: '{os.path.basename(geographic_area_boundary_path)}'")
+ dismap_tools.alter_fields(csv_data_folder, geographic_area_boundary_path)
+
+ geographic_area_path_md = md.Metadata(geographic_area_path)
+ dataset_md = md.Metadata(geographic_area_boundary_path)
+ dataset_md.copy(geographic_area_path_md)
+ dataset_md.save()
+ dataset_md.synchronize("OVERWRITE")
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ del dataset_md, geographic_area_path_md
+
+ del geographic_area, geographic_area_boundary
+
+ del geographic_area_path, geographic_area_shape_file
+ del geographic_area_boundary_path
+
+ # Remove template
+ arcpy.management.Delete(rf"{region_gdb}\DisMAP_Regions")
+ arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+
+ arcpy.management.Delete(rf"{region_gdb}\Datasets")
+ arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+
+ arcpy.management.Compact(region_gdb)
+
+ # Declared Variables for this function only
+ del datasetcode, region, season, distri_code
+ # Basic variables
+ del table_name, project_folder, csv_data_folder
+ # Imports
+ del md, dismap_tools
+ # Function parameter
+ del region_gdb
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit as se:
+ arcpy.AddError(f"Caught a SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ sys.exit()
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an unhandled exception in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+def script_tool(project_gdb=""):
+ try:
+ # Imports
+ import dismap_tools
+ from arcpy import metadata as md
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ del project_folder
+
+ # Clear Scratch Folder
+ dismap_tools.clear_folder(folder=scratch_folder)
+
+ # Create project scratch workspace, if missing
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ if not arcpy.Exists(scratch_folder):
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(scratch_folder, "scratch")
+ else:
+ pass
+
+ # Set worker parameters
+ table_name = "AI_IDW"
+ #table_name = "HI_IDW"
+ #table_name = "GMEX_IDW"
+ #table_name = "GOA_IDW"
+ #table_name = "NBS_IDW"
+ #table_name = "SEUS_SPR_IDW"
+
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ scratch_workspace = os.path.join(scratch_folder, f"{table_name}\\scratch.gdb")
+
+ # Create worker scratch workspace, if missing
+ if not arcpy.Exists(scratch_workspace):
+ os.makedirs(os.path.join(scratch_folder, table_name))
+ if not arcpy.Exists(scratch_workspace):
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, table_name), "scratch")
+ del scratch_workspace
+
+## edit = arcpy.da.Editor(region_gdb)
+## arcpy.AddMessage("edit created")
+## edit.startEditing()
+## arcpy.AddMessage("edit started")
+## edit.startOperation()
+## arcpy.AddMessage("operation started")
+
+ # Setup worker workspace and copy data
+ #datasets = [os.path.join(project_gdb, "Datasets"), os.path.join(project_gdb, "DisMAP_Regions")]
+ #if not any(arcpy.management.GetCount(d)[0] == 0 for d in datasets):
+ if not arcpy.Exists(os.path.join(scratch_folder, f"{table_name}.gdb")):
+ arcpy.management.CreateFileGDB(scratch_folder, table_name)
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ else:
+ pass
+
+ arcpy.management.Copy(os.path.join(project_gdb, "Datasets"), rf"{region_gdb}\Datasets")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.management.CreateFeatureclass(rf"{region_gdb}", "DisMAP_Regions", "POLYLINE", os.path.join(project_gdb, "DisMAP_Regions"))
+ arcpy.AddMessage("\tCreate Feature Class: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ dismap_regions_md = md.Metadata(os.path.join(project_gdb, "DisMAP_Regions"))
+ dataset_md = md.Metadata(rf"{region_gdb}\DisMAP_Regions")
+ dataset_md.copy(dismap_regions_md)
+ dataset_md.save()
+ dataset_md.synchronize("OVERWRITE")
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ del dataset_md, dismap_regions_md
+
+ #else:
+ # arcpy.AddWarning(f"One or more datasets contains zero records!!")
+ # for d in datasets:
+ # arcpy.AddMessage(f"\t{os.path.basename(d)} has {arcpy.management.GetCount(d)[0]} records")
+ # del d
+ # sys.exit()
+ #if "datasets" in locals().keys(): del datasets
+
+ try:
+ worker(region_gdb=region_gdb)
+ except SystemExit:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+
+ # Declared Variables
+ del region_gdb, table_name, scratch_folder
+ # Imports
+ del md, dismap_tools
+
+ # Function Parameters
+ del project_gdb
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(end_time-start_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit as se:
+ arcpy.AddError(f"Caught a SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ sys.exit()
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an unhandled exception in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+ script_tool(project_gdb)
+ arcpy.SetParameterAsText(1, "Result")
+ del project_gdb
+ except: # noqa: E722
+ traceback.print_exc()
+ else:
+ pass
+ finally:
+ pass
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_species_richness_rasters_director.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_species_richness_rasters_director.py
new file mode 100644
index 0000000..a512b1b
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_species_richness_rasters_director.py
@@ -0,0 +1,407 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_species_richness_rasters_director
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 09/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+
+import arcpy # third-parties second
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ #filename = sys.path[0] + os.sep + f"{os.path.basename(__file__)}"
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def preprocessing(project_gdb="", table_names="", clear_folder=True):
+ try:
+ import dismap_tools
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+ scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+
+ # Clear Scratch Folder
+ #ClearScratchFolder = True
+ #if ClearScratchFolder:
+ if clear_folder:
+ dismap_tools.clear_folder(folder=rf"{os.path.dirname(project_gdb)}\Scratch")
+ else:
+ pass
+ #del ClearScratchFolder
+ del clear_folder
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ del project_folder, scratch_workspace
+
+ if not table_names:
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"),
+ "TableName",
+ where_clause = "TableName LIKE '%_IDW'")]
+ else:
+ pass
+
+ for table_name in table_names:
+ arcpy.AddMessage(f"Pre-Processing: {table_name}")
+
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ region_scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+
+ # Create Scratch Workspace for Region
+ if not arcpy.Exists(region_scratch_workspace):
+ os.makedirs(os.path.join(scratch_folder, table_name))
+ if not arcpy.Exists(region_scratch_workspace):
+ arcpy.AddMessage(f"Create File GDB: '{table_name}'")
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, f"{table_name}"), "scratch")
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del region_scratch_workspace
+ # # # CreateFileGDB
+ arcpy.AddMessage(f"Creating File GDB: '{table_name}'")
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # CreateFileGDB
+ # # # Datasets
+ # Process: Make Table View (Make Table View) (management)
+ datasets = rf'{project_gdb}\Datasets'
+ arcpy.AddMessage(f"'{os.path.basename(datasets)}' has {arcpy.management.GetCount(datasets)[0]} records")
+
+ table_name_view = "Dataset Table View"
+ arcpy.management.MakeTableView(in_table = datasets,
+ out_view = table_name_view,
+ where_clause = f"TableName = '{table_name}'"
+ )
+ arcpy.AddMessage(f"The table '{table_name_view}' has {arcpy.management.GetCount(table_name_view)[0]} records")
+ arcpy.management.CopyRows(table_name_view, rf"{region_gdb}\Datasets")
+ arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.management.Delete(table_name_view)
+ del table_name_view
+ # # # Datasets
+ # # # LayerSpeciesYearImageName
+ #arcpy.AddMessage(f"The table '{table_name}_LayerSpeciesYearImageName' has {arcpy.management.GetCount(table_name_view)[0]} records")
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_LayerSpeciesYearImageName", rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # LayerSpeciesYearImageName
+ # # # Raster_Mask
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_Raster_Mask", rf"{region_gdb}\{table_name}_Raster_Mask")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # Raster_Mask
+
+ del datasets #, filter_region, filter_subregion
+ # Leave so we can block the above code
+ # Declared Variables
+ del table_name
+
+ # Declared Variables
+ del scratch_folder, region_gdb
+ # Imports
+ del dismap_tools
+ # Function Parameters
+ del project_gdb, table_names
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def director(project_gdb="", Sequential=True, table_names=[]):
+ try:
+ from create_species_richness_rasters_worker import worker
+
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(rf"{project_gdb}"):
+ arcpy.AddError(f"{os.path.basename(project_gdb)} is missing!!")
+ arcpy.AddError(arcpy.GetMessages(2))
+ sys.exit()
+ else:
+ pass
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
+
+## project_folder = os.path.dirname(project_gdb)
+## scratch_folder = rf"{os.path.dirname(project_gdb)}\Scratch"
+## scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+##
+## arcpy.env.workspace = project_gdb
+## arcpy.env.scratchWorkspace = scratch_workspace
+## del project_folder, scratch_workspace
+
+ # Sequential Processing
+ if Sequential:
+ arcpy.AddMessage("Sequential Processing")
+ for i in range(0, len(table_names)):
+ arcpy.AddMessage(f"Processing: {table_names[i]}")
+ table_name = table_names[i]
+ region_gdb = rf"{os.path.dirname(project_gdb)}\Scratch\{table_name}.gdb"
+ try:
+ worker(region_gdb=region_gdb)
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ del region_gdb, table_name
+ del i
+ else:
+ pass
+
+ # Non-Sequential Processing
+ if not Sequential:
+ import multiprocessing
+ from time import time, localtime, strftime, sleep, gmtime
+ arcpy.AddMessage("Start multiprocessing using the ArcGIS Pro pythonw.exe.")
+ # Set the multiprocessing executable in case we're running as an embedded process (i.e. inside ArcGIS Pro)
+ multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
+ # Get the CPU count, then reserve two cores for other processes
+ _processes = multiprocessing.cpu_count() - 2
+ _processes = _processes if len(table_names) >= _processes else len(table_names)
+ arcpy.AddMessage(f"Creating the multiprocessing Pool with {_processes} processes")
+ # Create a pool of workers, keeping two CPUs free for other work.
+ #Let each worker process only handle 1 task before being restarted (in case of nasty memory leaks)
+ with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
+ arcpy.AddMessage("\tPrepare arguments for processing")
+ # Use apply_async so we can handle exceptions gracefully
+ jobs={}
+ for i in range(0, len(table_names)):
+ try:
+ arcpy.AddMessage(f"Processing: {table_names[i]}")
+ table_name = table_names[i]
+ region_gdb = rf"{os.path.dirname(project_gdb)}\Scratch\{table_name}.gdb"
+ jobs[table_name] = pool.apply_async(worker, [region_gdb])
+ del table_name, region_gdb
+ except: # noqa: E722
+ pool.terminate()
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ del i
+ all_finished = False
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ result_completed = {}
+ while True:
+ all_finished = True
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ arcpy.AddMessage("\nHave the workers finished?")
+ finish_time = strftime('%a %b %d %I:%M %p', localtime())
+ time_elapsed = u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
+ arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ finish_time = f"{finish_time}.\n\t{time_elapsed}"
+ del time_elapsed
+ for table_name, result in jobs.items():
+ if result.ready():
+ if table_name not in result_completed:
+ result_completed[table_name] = finish_time
+ try:
+ # wait for and get the result from the task
+ result.get()
+ except: # noqa: E722
+ pool.terminate()
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ pass
+ arcpy.AddMessage(f"Process {table_name}\n\tFinished on {result_completed[table_name]}")
+ else:
+ all_finished = False
+ arcpy.AddMessage(f"Process {table_name} is running. . .")
+ del table_name, result
+ del elapse_time, end_time, finish_time
+ if all_finished:
+ break
+ sleep(_processes * 7.5)
+ del result_completed
+ del start_time
+ del all_finished
+ arcpy.AddMessage("\tClose the process pool")
+ # close the process pool
+ pool.close()
+ # wait for all tasks to complete and processes to close
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
+ pool.join()
+ # Just in case
+ pool.terminate()
+ del pool
+ del jobs
+ del _processes
+ del time, multiprocessing, localtime, strftime, sleep, gmtime
+
+ arcpy.AddMessage("\tDone with multiprocessing Pool")
+
+ arcpy.AddMessage(f"Compacting the {os.path.basename(project_gdb)} GDB")
+ arcpy.management.Compact(project_gdb)
+ arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+
+ # Declared Variables assigned in function
+ #del scratch_folder
+ # Imports
+ del worker
+ # Function Parameters
+ del project_gdb, Sequential, table_names
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def script_tool(project_gdb=""):
+ try:
+ # Imports
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ try:
+ # "AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW",
+ # "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",
+
+ Test = True
+ if Test:
+ #director(project_gdb=project_gdb, Sequential=True, table_names=["SEUS_FAL_IDW"])
+ director(project_gdb=project_gdb, Sequential=True, table_names=["AI_IDW","EBS_IDW","NBS_IDW","NEUS_FAL_IDW",])
+ else:
+ director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
+ del Test
+ #except SystemExit:
+ except: # noqa: E722
+ pass
+ #arcpy.AddError(arcpy.GetMessages(2))
+ #traceback.print_exc()
+ #sys.exit()
+
+ # Declared Variables
+
+ # Function
+ del project_gdb
+
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(end_time-start_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+ script_tool(project_gdb)
+ arcpy.SetParameterAsText(1, "Result")
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_species_richness_rasters_worker.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_species_richness_rasters_worker.py
new file mode 100644
index 0000000..f059dc7
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_species_richness_rasters_worker.py
@@ -0,0 +1,460 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name:        create_species_richness_rasters_worker
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 09/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+
+import arcpy # third-parties second
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ #filename = sys.path[0] + os.sep + f"{os.path.basename(__file__)}"
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def print_table(table=""):
+ """Print the first 5 rows of a table."""
+ try:
+ desc = arcpy.da.Describe(table)
+ fields = [f.name for f in desc["fields"] if f.type == "String"]
+ #Get OID field
+ oid = desc["OIDFieldName"]
+ # Limit output to the first five rows by OID
+ arcpy.AddMessage(f"{', '.join(fields)}")
+ for row in arcpy.da.SearchCursor(table, fields, f"{oid} <= 5"):
+ arcpy.AddMessage(row)
+ del row
+ del desc, fields, oid
+ del table
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def worker(region_gdb=""):
+ try:
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(region_gdb):
+ arcpy.AddError(f"{os.path.basename(region_gdb)} is missing!!")
+ sys.exit()
+
+ # Import
+ import numpy as np
+ from arcpy import metadata as md
+ # Import the dismap module to access tools
+ import dismap_tools
+
+ # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic workspace variables
+ table_name = os.path.basename(region_gdb).replace(".gdb","")
+ scratch_folder = os.path.dirname(region_gdb)
+ project_folder = os.path.dirname(scratch_folder)
+ scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+ region_raster_mask = rf"{region_gdb}\{table_name}_Raster_Mask"
+
+ arcpy.AddMessage(f"Table Name: {table_name}\n\tProject Folder: {project_folder}\n\tScratch Folder: {scratch_folder}\n")
+
+ del scratch_folder
+
+ # Set basic arcpy.env variables
+ arcpy.env.workspace = region_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.env.pyramid = "PYRAMIDS -1 BILINEAR LZ77 NO_SKIP"
+ arcpy.env.resamplingMethod = "BILINEAR"
+ arcpy.env.rasterStatistics = "STATISTICS 1 1"
+
+ arcpy.AddMessage(f"Creating {table_name} Species Richness Rasters")
+
+ arcpy.AddMessage("\tGet list of variables from the 'Datasets' table")
+
+ # DatasetCode, CSVFile, TransformUnit, TableName, GeographicArea, CellSize,
+ # PointFeatureType, FeatureClassName, Region, Season, DateCode, Status,
+ # DistributionProjectCode, DistributionProjectName, SummaryProduct,
+ # FilterRegion, FilterSubRegion, FeatureServiceName, FeatureServiceTitle,
+ # MosaicName, MosaicTitle, ImageServiceName, ImageServiceTitle
+
+ # Get values for table_name from Datasets table
+ fields = ["TableName", "GeographicArea", "DatasetCode", "CellSize"]
+ region_list = [row for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", fields, where_clause = f"TableName = '{table_name}'")][0]
+ del fields
+
+ # Assigning variables from items in the chosen table list
+ # e.g. ('AI_IDW', 'AI_IDW_Region', 'AI', 2000)
+ table_name = region_list[0]
+ #geographic_area = region_list[1]
+ datasetcode = region_list[2]
+ cell_size = region_list[3]
+ del region_list
+
+
+ if isinstance(cell_size, str):
+ cell_size = int(cell_size)
+ else:
+ pass
+
+ arcpy.AddMessage(f"\tGet the 'rowCount', 'columnCount', and 'lowerLeft' corner of '{table_name}_Raster_Mask'")
+ # These are used later to set the rows and columns for a zero numpy array
+ rowCount = int(arcpy.management.GetRasterProperties(region_raster_mask, "ROWCOUNT" ).getOutput(0))
+ columnCount = int(arcpy.management.GetRasterProperties(region_raster_mask, "COLUMNCOUNT" ).getOutput(0))
+
+ # Create Raster from Array
+ raster_mask_extent = arcpy.Raster(region_raster_mask)
+ lowerLeft = arcpy.Point(raster_mask_extent.extent.XMin, raster_mask_extent.extent.YMin)
+ del raster_mask_extent
+
+ arcpy.AddMessage("\tSet the 'outputCoordinateSystem' based on the projection information for the geographic region")
+ #geographic_area_sr = rf"{project_folder}\Dataset_Shapefiles\{table_name}\{geographic_area}.prj"
+ #geographic_area_sr = arcpy.Describe(region_raster_mask).spatialReference
+ #geographic_area_sr = rf"{project_folder}\Dataset_Shapefiles\{table_name}\{geographic_area}.prj"
+ # Set the output coordinate system to what is needed for the
+ # DisMAP project
+ arcpy.env.outputCoordinateSystem = arcpy.Describe(region_raster_mask).spatialReference
+ #del geographic_area_sr, geographic_area, psr
+ #del geographic_area
+
+ arcpy.AddMessage("\tGet information for input rasters")
+
+ layerspeciesyearimagename = rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName"
+
+ fields = ['DatasetCode', 'CoreSpecies', 'Year', 'Variable', 'ImageName']
+ input_rasters = {}
+ input_rasters_path = rf"{project_folder}\Images\{table_name}"
+
+ with arcpy.da.SearchCursor(layerspeciesyearimagename, fields, where_clause=f"Variable NOT IN ('Core Species Richness', 'Species Richness') and DatasetCode = '{datasetcode}'") as cursor:
+ for row in cursor:
+ _datasetcode = row[0]
+ _corespecies = row[1]
+ _year = row[2]
+ _variable = row[3]
+ _image = row[4]
+ input_rasters[f"{_image}.tif"] = [_variable, _corespecies, _year, os.path.join(input_rasters_path, _variable, f"{_image}.tif")]
+ del row, _datasetcode, _corespecies, _year, _variable, _image
+ del cursor
+ del input_rasters_path, fields, layerspeciesyearimagename
+
+ #for input_raster in input_rasters:
+ # arcpy.AddMessage(input_raster, input_rasters[input_raster])
+ # del input_raster
+
+ arcpy.AddMessage("\tSet the output and scratch paths")
+
+ # Set species_richness_path
+ species_richness_path = rf"{project_folder}\Images\{table_name}\_Species Richness"
+ species_richness_scratch_path = rf"{project_folder}\Scratch\{table_name}\_Species Richness"
+
+ if not os.path.exists(species_richness_path):
+ os.makedirs(species_richness_path)
+ if not os.path.exists(species_richness_scratch_path):
+ os.makedirs(species_richness_scratch_path)
+
+ years = sorted({input_rasters[input_raster][2] for input_raster in input_rasters})
+
+ arcpy.AddMessage("\tProcessing all species")
+
+ for year in years:
+
+ layercode_year_richness = os.path.join(species_richness_path, f"{table_name}_Species_Richness_{year}.tif")
+
+ #if not arcpy.Exists(layercode_year_richness):
+
+ arcpy.AddMessage(f"\t\tProcessing rasters for year: {year}")
+
+ richnessArray = np.zeros((rowCount, columnCount), dtype='float32', order='C')
+
+ rasters = [r for r in input_rasters if input_rasters[r][2] == year]
+
+ # For each raster exported, create the Con mask
+ for raster in rasters:
+ arcpy.AddMessage(f"\t\t\tProcessing the {raster} raster")
+
+ _in_raster = input_rasters[raster][3]
+
+ rasterArray = arcpy.RasterToNumPyArray(_in_raster, nodata_to_value=np.nan)
+
+ rasterArray[rasterArray < 0.0] = np.nan
+
+ rasterArray[rasterArray > 0.0] = 1.0
+
+ # Add rasterArray to richnessArray (equivalent to richnessArray + rasterArray);
+ # np.add propagates NaN, so cells that are NoData in any input raster stay NoData
+ richnessArray = np.add(richnessArray, rasterArray)
+ del rasterArray, _in_raster, raster
+
+ arcpy.AddMessage(f"\t\tCreating Species Richness Raster for year: {year}")
+
+ # Cast array as float32
+ richnessArray = richnessArray.astype('float32')
+
+ # Convert Array to Raster
+ with arcpy.EnvManager(scratchWorkspace=species_richness_scratch_path, workspace = species_richness_path):
+ richnessArrayRaster = arcpy.NumPyArrayToRaster(richnessArray, lowerLeft, cell_size, cell_size, -3.40282346639e+38)
+ richnessArrayRaster.save(layercode_year_richness)
+ del richnessArrayRaster
+ # Add statistics
+ arcpy.management.CalculateStatistics(layercode_year_richness)
+
+ raster_md = md.Metadata(layercode_year_richness)
+ raster_md.title = os.path.basename(layercode_year_richness).replace("_", " ")
+ raster_md.save()
+ raster_md.synchronize("ALWAYS")
+ raster_md.save()
+ del raster_md
+
+ del richnessArray, rasters
+
+ #else:
+ # arcpy.AddMessage(f"\t\t{os.path.basename(layercode_year_richness)} exists")
+
+ del year, layercode_year_richness
+
+ del years, species_richness_path, species_richness_scratch_path
+
+ # ###--->>>
+
+ arcpy.AddMessage(f"\tCreating the {table_name} Core Species Richness Rasters")
+
+ # Set core_species_richness_path
+ core_species_richness_path = rf"{project_folder}\Images\{table_name}\_Core Species Richness"
+ core_species_richness_scratch_path = rf"{project_folder}\Scratch\{table_name}\_Core Species Richness"
+
+ if not os.path.exists(core_species_richness_path):
+ os.makedirs(core_species_richness_path)
+ if not os.path.exists(core_species_richness_scratch_path):
+ os.makedirs(core_species_richness_scratch_path)
+
+ years = sorted({input_rasters[input_raster][2] for input_raster in input_rasters if input_rasters[input_raster][1] == "Yes"})
+
+ # ###--->>>
+ arcpy.AddMessage("\t\tProcessing Core Species")
+
+ for year in years:
+
+ layercode_year_richness = os.path.join(core_species_richness_path, f"{table_name}_Core_Species_Richness_{year}.tif")
+
+ #if not arcpy.Exists(layercode_year_richness):
+
+ richnessArray = np.zeros((rowCount, columnCount), dtype='float32', order='C')
+
+ rasters = [r for r in input_rasters if input_rasters[r][2] == year and input_rasters[r][1] == "Yes"]
+
+ arcpy.AddMessage("\t\tProcessing rasters")
+
+ # For each raster exported, create the Con mask
+ for raster in rasters:
+ arcpy.AddMessage(f"\t\t\tProcessing {raster} raster")
+
+ _in_raster = input_rasters[raster][3]
+
+ rasterArray = arcpy.RasterToNumPyArray(_in_raster, nodata_to_value=np.nan)
+ rasterArray[rasterArray < 0.0] = np.nan
+
+ rasterArray[rasterArray > 0.0] = 1.0
+
+ # Add rasterArray to richnessArray (equivalent to richnessArray + rasterArray);
+ # np.add propagates NaN, so cells that are NoData in any input raster stay NoData
+ richnessArray = np.add(richnessArray, rasterArray)
+ del rasterArray, _in_raster, raster
+
+ arcpy.AddMessage(f"\t\tCreating Core Species Richness Raster for year: {year}")
+
+ # Cast array as float32
+ richnessArray = richnessArray.astype('float32')
+
+ # Convert Array to Raster
+ with arcpy.EnvManager(scratchWorkspace=core_species_richness_scratch_path, workspace = core_species_richness_path):
+ richnessArrayRaster = arcpy.NumPyArrayToRaster(richnessArray, lowerLeft, cell_size, cell_size, -3.40282346639e+38)
+ richnessArrayRaster.save(layercode_year_richness)
+ del richnessArrayRaster
+ # Add statistics
+ arcpy.management.CalculateStatistics(layercode_year_richness)
+
+ raster_md = md.Metadata(layercode_year_richness)
+ raster_md.title = os.path.basename(layercode_year_richness).replace("_", " ")
+ raster_md.save()
+ del raster_md
+
+ del richnessArray, rasters
+ #else:
+ # arcpy.AddMessage(f"\t\t{os.path.basename(layercode_year_richness)} exists")
+
+ del year, layercode_year_richness
+ del years, core_species_richness_path, core_species_richness_scratch_path
+
+ # End of business logic for the worker function
+ arcpy.AddMessage(f"Processing for: {table_name} complete")
+
+ # Clean up
+ # Declared Variables for this function only
+ del rowCount, columnCount, lowerLeft, input_rasters
+ del cell_size
+ del region_raster_mask
+
+ # Declared Variables for this function only
+ del datasetcode
+ # Basic variables
+ del table_name, project_folder, scratch_workspace
+ # Imports
+ del md, dismap_tools, np
+ # Function parameter
+ del region_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def script_tool(project_gdb=""):
+ try:
+ # Imports
+ import dismap_tools
+ from create_species_richness_rasters_director import preprocessing
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ # Clear Scratch Folder
+ ClearScratchFolder = False
+ if ClearScratchFolder:
+ dismap_tools.clear_folder(folder=rf"{os.path.dirname(project_gdb)}\Scratch")
+ else:
+ pass
+ del ClearScratchFolder
+
+ ## # Set worker parameters
+ ## #table_name = "AI_IDW"
+ ## table_name = "HI_IDW"
+ ## #table_name = "NBS_IDW"
+ ## #table_name = "ENBS_IDW"
+
+ table_names = ["HI_IDW"]
+
+ preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
+
+ for table_name in table_names:
+ region_gdb = rf"{os.path.dirname(project_gdb)}\Scratch\{table_name}.gdb"
+ try:
+ worker(region_gdb=region_gdb)
+ except SystemExit:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+
+ del table_name, region_gdb
+
+ del table_names
+
+ # Declared Variables
+ # Imports
+ del dismap_tools, preprocessing
+ # Function Parameters
+ del project_gdb
+
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(end_time-start_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+ script_tool(project_gdb)
+ arcpy.SetParameterAsText(1, "Result")
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_species_year_image_name_table_director.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_species_year_image_name_table_director.py
new file mode 100644
index 0000000..2263419
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_species_year_image_name_table_director.py
@@ -0,0 +1,516 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_species_year_image_name_table_director
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 09/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+
+import arcpy # third-parties second
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def preprocessing(project_gdb="", table_names="", clear_folder=True):
+ try:
+ import dismap_tools
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+ scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+
+ # Clear Scratch Folder
+ #ClearScratchFolder = True
+ #if ClearScratchFolder:
+ if clear_folder:
+ dismap_tools.clear_folder(folder=scratch_folder)
+ else:
+ pass
+ #del ClearScratchFolder
+ del clear_folder
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ del project_folder, scratch_workspace
+
+ if not table_names:
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"),
+ "TableName",
+ where_clause = "TableName LIKE '%_IDW'")]
+ else:
+ pass
+
+ for table_name in table_names:
+ arcpy.AddMessage(f"Pre-Processing: {table_name}")
+
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ region_scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+
+ # Create Scratch Workspace for Region
+ if not os.path.exists(os.path.join(scratch_folder, table_name)):
+ os.makedirs(os.path.join(scratch_folder, table_name))
+ if not arcpy.Exists(region_scratch_workspace):
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, table_name), "scratch")
+ del region_scratch_workspace
+
+ arcpy.AddMessage(f"Creating File GDB: {table_name}")
+ arcpy.management.CreateFileGDB(scratch_folder, table_name)
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ # Process: Make Table View (Make Table View) (management)
+ datasets = rf'{project_gdb}\Datasets'
+ arcpy.AddMessage(f"\t{os.path.basename(datasets)} has {arcpy.management.GetCount(datasets)[0]} records")
+
+ table_name_view = "Dataset Table View"
+ arcpy.management.MakeTableView(in_table = datasets,
+ out_view = table_name_view,
+ where_clause = f"TableName = '{table_name}'"
+ )
+ arcpy.AddMessage(f"\tThe table {table_name_view} has {arcpy.management.GetCount(table_name_view)[0]} records")
+ arcpy.management.CopyRows(table_name_view, rf"{region_gdb}\Datasets")
+ arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ filter_region = [row[0] for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", "FilterRegion")][0].replace("'", "''")
+ filter_subregion = [row[0] for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", "FilterSubRegion")][0].replace("'", "''")
+
+ arcpy.management.Delete(table_name_view)
+ del table_name_view
+
+ region_table = rf"{project_gdb}\{table_name}"
+ arcpy.AddMessage(f"\t{os.path.basename(region_table)} has {arcpy.management.GetCount(region_table)[0]} records")
+ # Process: Make Table View (Make Table View) (management)
+ table_name_view = "IDW Table View"
+ arcpy.management.MakeTableView(in_table = region_table,
+ out_view = table_name_view,
+ where_clause = "DistributionProjectName = 'NMFS/Rutgers IDW Interpolation'"
+ )
+ # Process: Copy Rows (Copy Rows) (management)
+ arcpy.AddMessage(f"\t{table_name_view} has {arcpy.management.GetCount(table_name_view)[0]} records")
+ arcpy.management.CopyRows(in_rows = table_name_view, out_table = rf"{region_gdb}\{table_name}")
+ arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.management.Delete(table_name_view)
+ del table_name_view
+
+ # Process: Make Table View (Make Table View) (management)
+ #arcpy.AddMessage(filter_subregion)
+ species_filter = rf"{project_gdb}\Species_Filter"
+ arcpy.AddMessage(f"\t{os.path.basename(species_filter)} has {arcpy.management.GetCount(species_filter)[0]} records")
+ table_name_view = "Species Filter Table View"
+ arcpy.management.MakeTableView(in_table = species_filter,
+ out_view = table_name_view,
+ #where_clause = f"FilterSubRegion = '{filter_subregion}'",
+ where_clause = f"FilterSubRegion = '{filter_subregion}' AND DistributionProjectName = 'NMFS/Rutgers IDW Interpolation'",
+ workspace=region_gdb,
+ field_info="OBJECTID OBJECTID VISIBLE NONE;Species Species VISIBLE NONE;CommonName CommonName VISIBLE NONE;TaxonomicGroup TaxonomicGroup VISIBLE NONE;FilterRegion FilterRegion VISIBLE NONE;FilterSubRegion FilterSubRegion VISIBLE NONE;ManagementBody ManagementBody VISIBLE NONE;ManagementPlan ManagementPlan VISIBLE NONE;DistributionProjectName DistributionProjectName VISIBLE NONE"
+ )
+
+ arcpy.AddMessage(f"\t{table_name_view} has {arcpy.management.GetCount(table_name_view)[0]} records")
+ arcpy.management.CopyRows(in_rows = table_name_view, out_table = rf"{region_gdb}\Species_Filter")
+ arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.management.Delete(table_name_view)
+ del table_name_view
+ #print(filter_region, filter_subregion)
+ #
+ del region_table, species_filter
+ del datasets, filter_region, filter_subregion
+ # Kept separate so the block above can be commented out without breaking cleanup
+ # Declared Variables
+ del table_name
+
+ # Declared Variables
+ del scratch_folder, region_gdb
+ # Imports
+ del dismap_tools
+ # Function Parameters
+ del project_gdb, table_names
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ sys.exit()
+ return False
+ else:
+ return True
+
+def director(project_gdb="", Sequential=True, table_names=[]):
+ try:
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(project_gdb):
+ sys.exit(f"{os.path.basename(project_gdb)} is missing!!")
+
+ import dismap_tools
+ from create_species_year_image_name_table_worker import worker
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ del project_folder
+
+ #scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+ #csv_data_folder = rf"{project_folder}\CSV_Data"
+ #arcpy.env.workspace = project_gdb
+ #arcpy.env.scratchWorkspace = scratch_workspace
+ #del project_folder, scratch_workspace
+
+ preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
+
+ # Sequential Processing
+ if Sequential:
+ arcpy.AddMessage("Sequential Processing")
+ for i in range(0, len(table_names)):
+ arcpy.AddMessage(f"Processing: {table_names[i]}")
+ table_name = table_names[i]
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ try:
+ worker(region_gdb=region_gdb)
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ del region_gdb, table_name
+ del i
+ else:
+ pass
+
+ # Non-Sequential Processing
+ if not Sequential:
+ arcpy.AddMessage("Non-Sequential Processing")
+ # Imports
+ import multiprocessing
+ from time import time, localtime, strftime, sleep, gmtime
+ arcpy.AddMessage("Start multiprocessing using the ArcGIS Pro pythonw.exe.")
+ #Set the multiprocessing executable in case we're running as an embedded process (i.e. inside ArcGIS Pro)
+ #get_install_path() uses a registry query to find the 64-bit Python executable, if available
+ multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
+ # Get the CPU count, then reserve two cores for other processes
+ _processes = multiprocessing.cpu_count() - 2
+ _processes = _processes if len(table_names) >= _processes else len(table_names)
+ arcpy.AddMessage(f"Creating the multiprocessing Pool with {_processes} processes")
+ #Create a pool of workers, keeping two cores free for the rest of the system
+ #Let each worker process handle only 1 task before being restarted (in case of nasty memory leaks)
+ with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
+ arcpy.AddMessage("\tPrepare arguments for processing")
+ # Use apply_async so we can handle exceptions gracefully
+ jobs={}
+ for i in range(0, len(table_names)):
+ try:
+ arcpy.AddMessage(f"Processing: {table_names[i]}")
+ table_name = table_names[i]
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ jobs[table_name] = pool.apply_async(worker, [region_gdb])
+ del table_name, region_gdb
+ except: # noqa: E722
+ pool.terminate()
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ del i
+ all_finished = False
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ result_completed = {}
+ while True:
+ all_finished = True
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ arcpy.AddMessage(f"\nStart Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage("Have the workers finished?")
+ finish_time = strftime('%a %b %d %I:%M %p', localtime())
+ time_elapsed = u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
+ arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
+ finish_time = f"{finish_time}.\n\t{time_elapsed}"
+ del time_elapsed
+ for table_name, result in jobs.items():
+ if result.ready():
+ if table_name not in result_completed:
+ result_completed[table_name] = finish_time
+ try:
+ # wait for and get the result from the task
+ result.get()
+ except SystemExit:
+ pool.terminate()
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ pass
+ arcpy.AddMessage(f"Process {table_name}\n\tFinished on {result_completed[table_name]}")
+ else:
+ all_finished = False
+ arcpy.AddMessage(f"Process {table_name} is running. . .")
+ del table_name, result
+ del elapse_time, end_time, finish_time
+ if all_finished:
+ break
+ sleep(_processes * 7.5)
+ del result_completed
+ del start_time
+ del all_finished
+ arcpy.AddMessage("\tClose the process pool")
+ # close the process pool
+ pool.close()
+ # wait for all tasks to complete and processes to close
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
+ pool.join()
+ # Just in case
+ pool.terminate()
+ del pool
+ del jobs
+ del _processes
+ del time, multiprocessing, localtime, strftime, sleep, gmtime
+ arcpy.AddMessage("\tDone with multiprocessing Pool")
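The `apply_async` plus `ready()` polling loop above can be sketched in miniature. This sketch uses `multiprocessing.dummy` (a thread-backed `Pool` with the same API) so it runs standalone without pickling or spawn-mode concerns; `slow_square` is a stand-in for the real `worker` function:

```python
from multiprocessing.dummy import Pool
from time import sleep

def slow_square(x):
    sleep(0.05)  # simulate a slow worker task
    return x * x

def run_jobs(values):
    results = {}
    with Pool(processes=2) as pool:
        # Submit all tasks up front; keep handles keyed by input value
        jobs = {v: pool.apply_async(slow_square, [v]) for v in values}
        # Poll until every job reports ready, as the director loop does
        while not all(job.ready() for job in jobs.values()):
            sleep(0.01)
        for v, job in jobs.items():
            results[v] = job.get()  # get() re-raises any worker exception
        pool.close()
        pool.join()
    return results
```

The real script adds per-job timing messages and terminates the pool on `SystemExit`, but the submit/poll/collect skeleton is the same.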
+
+ # Post-Processing
+ arcpy.AddMessage("Post-Processing Begins")
+ arcpy.AddMessage("Processing Results")
+ datasets = list()
+ walk = arcpy.da.Walk(scratch_folder, datatype=["Table", "FeatureClass"])
+ for dirpath, dirnames, filenames in walk:
+ for filename in filenames:
+ if filename.endswith("LayerSpeciesYearImageName"):
+ datasets.append(os.path.join(dirpath, filename))
+ else:
+ pass
+ del filename
+ del dirpath, dirnames, filenames
+ del walk
+ for dataset in datasets:
+ datasets_short_path = f".. {'/'.join(dataset.split(os.sep)[-4:])}"
+ dataset_name = os.path.basename(dataset)
+ region_gdb = os.path.dirname(dataset)
+ arcpy.AddMessage(f"\tDataset: '{dataset_name}'")
+ arcpy.AddMessage(f"\t\tPath: '{datasets_short_path}'")
+ arcpy.AddMessage(f"\t\tRegion GDB: '{os.path.basename(region_gdb)}'")
+ arcpy.AddMessage(f"\tCopying the {dataset_name} Table to the project GDB Table")
+ arcpy.management.Copy(rf"{region_gdb}\{dataset_name}", rf"{project_gdb}\{dataset_name}")
+ arcpy.AddMessage("\tCopy: {0} {1}\n".format(dataset_name, arcpy.GetMessages(0).replace("\n", '\n\t')))
+
+ arcpy.AddMessage("\t\tUpdating field values to replace None with empty string")
+ fields = [f.name for f in arcpy.ListFields(rf"{project_gdb}\{dataset_name}") if f.type == "String"]
+ # Create update cursor for feature class
+ with arcpy.da.UpdateCursor(rf"{project_gdb}\{dataset_name}", fields) as cursor:
+ for row in cursor:
+ # enumerate replaces every None in the row; row.index(value)
+ # would only ever locate the first matching element
+ for index, field_value in enumerate(row):
+ if field_value is None:
+ row[index] = ""
+ del index, field_value
+ cursor.updateRow(row)
+ del row
+ del fields, cursor
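The None-to-empty-string cleanup above can be factored into a plain function so the logic is testable without arcpy; the cursor loop would then just call it on each row before `updateRow()` (`blank_nones` is an illustrative name, not part of the original script):

```python
def blank_nones(row):
    """Return a copy of `row` with every None replaced by an empty string."""
    return ["" if value is None else value for value in row]
```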
+
+ del region_gdb, dataset_name, datasets_short_path
+ del dataset
+ del datasets
+
+ arcpy.AddMessage(f"Compacting the {os.path.basename(project_gdb)} GDB")
+ arcpy.management.Compact(project_gdb)
+ arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+
+ # Declared Variables assigned in function
+ del scratch_folder
+ # Imports
+ del dismap_tools, worker
+ # Function Parameters
+ del project_gdb, Sequential, table_names
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def script_tool(project_gdb=""):
+ try:
+ # Imports
+ import dismap_tools
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{os.path.dirname(project_gdb)}\Scratch"
+ del project_folder
+
+ # Clear Scratch Folder
+ ClearScratchFolder = False
+ if ClearScratchFolder:
+ #if clear_folder:
+ _scratch_folder = rf"{os.path.dirname(project_gdb)}\Scratch"
+ dismap_tools.clear_folder(folder=_scratch_folder)
+ del _scratch_folder
+ else:
+ pass
+ del ClearScratchFolder
+ #del clear_folder
+
+ # Create project scratch workspace, if missing
+ if not arcpy.Exists(scratch_folder):
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(scratch_folder, "scratch")
+ del scratch_folder
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ try:
+ # table_names = ["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",]
+ Test = False
+ if Test:
+ director(project_gdb=project_gdb, Sequential=True, table_names=["GMEX_IDW", "HI_IDW", "WC_ANN_IDW", "WC_TRI_IDW"])
+ #director(project_gdb=project_gdb, Sequential=False, table_names=["SEUS_SPR_IDW", "HI_IDW"])
+ elif not Test:
+ director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
+ else:
+ pass
+ del Test
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+
+ # Clear Scratch Folder
+ ClearScratchFolder = False
+ if ClearScratchFolder:
+ #if clear_folder:
+ _scratch_folder = rf"{os.path.dirname(project_gdb)}\Scratch"
+ dismap_tools.clear_folder(folder=_scratch_folder)
+ del _scratch_folder
+ else:
+ pass
+ del ClearScratchFolder
+ #del clear_folder
+
+ # Declared Variables
+ del dismap_tools
+ # Function Parameters
+ del project_gdb
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(elapse_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
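The divmod-based elapsed-time formatting above can be isolated into a small helper for reuse across the director and script_tool timers (`format_elapsed` is an illustrative name):

```python
def format_elapsed(seconds):
    """Format a duration in seconds as HH:MM:SS.ss, matching the script's output."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{int(hours):0>2}:{int(minutes):0>2}:{secs:05.2f}"
```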
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ sys.exit()
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+
+ script_tool(project_gdb)
+
+ arcpy.SetParameterAsText(1, "Result")
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_species_year_image_name_table_worker.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_species_year_image_name_table_worker.py
new file mode 100644
index 0000000..ac4f374
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/create_species_year_image_name_table_worker.py
@@ -0,0 +1,559 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: create_species_year_image_name_table_worker
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 09/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+import inspect
+
+import arcpy # third-parties second
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = sys.path[0] + os.sep + "test.py"
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def print_table(table=""):
+ """ Print the first 5 rows of a table's string fields """
+ try:
+ desc = arcpy.da.Describe(table)
+ fields = [f.name for f in desc["fields"] if f.type == "String"]
+ #Get OID field
+ oid = desc["OIDFieldName"]
+ # Limit output to the first five rows via a where clause on the OID field
+ arcpy.AddMessage(f"{', '.join(fields)}")
+ for row in arcpy.da.SearchCursor(table, fields, f"{oid} <= 5"):
+ arcpy.AddMessage(row)
+ del row
+ del desc, fields, oid
+ del table
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+
+def worker(region_gdb=""):
+ try:
+## # Test if passed workspace exists, if not sys.exit()
+## if not arcpy.Exists(rf"{region_gdb}"):
+## arcpy.AddError(f"{os.path.basename(region_gdb)} is missing!!")
+## sys.exit()
+
+ from arcpy import metadata as md
+ # Import the dismap module to access tools
+ import dismap_tools
+
+ # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic workspace variables
+ table_name = os.path.basename(region_gdb).replace(".gdb", "")
+ scratch_folder = os.path.dirname(region_gdb)
+ project_folder = os.path.dirname(scratch_folder)
+ scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+ region_table = rf"{region_gdb}\{table_name}"
+ csv_data_folder = rf"{project_folder}\CSV_Data"
+
+ arcpy.AddMessage(f"Table Name: {table_name}\nProject Folder: {project_folder}\nScratch Folder: {scratch_folder}\n")
+ # Remove workspace variables no longer needed
+ del project_folder, scratch_folder
+
+ arcpy.env.workspace = region_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # **********************************************************************
+ # Start: Create new LayerSpeciesYearImageName table
+ layer_species_year_image_name = rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName"
+
+ arcpy.AddMessage(f"Create Table: {table_name}_LayerSpeciesYearImageName" )
+ arcpy.management.CreateTable(out_path = region_gdb,
+ out_name = f"{table_name}_LayerSpeciesYearImageName",
+ template = "",
+ config_keyword = "",
+ out_alias = "")
+ arcpy.AddMessage("\tCreate Table: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ # Add Fields to layer_species_year_image_name
+ dismap_tools.add_fields(csv_data_folder, layer_species_year_image_name)
+ dismap_tools.import_metadata(csv_data_folder, layer_species_year_image_name)
+ # End
+ # **********************************************************************
+
+ # **********************************************************************
+ # Start: Get filter values from the Datasets table
+ arcpy.AddMessage("\nDatasets Table\n" )
+ datasets_table = rf"{region_gdb}\Datasets"
+ datasets_table_fields = [f.name for f in arcpy.ListFields(datasets_table) if f.type not in ['Geometry', 'OID']]
+ print_table(datasets_table)
+ #region = [row[0] for row in arcpy.da.SearchCursor(datasets_table, "Region", where_clause = f"TableName = '{table_name}'")][0]
+ filter_region = [row[0] for row in arcpy.da.SearchCursor(datasets_table, "FilterRegion", where_clause = f"TableName = '{table_name}'")][0].replace("'", "''")
+ filter_subregion = [row[0] for row in arcpy.da.SearchCursor(datasets_table, "FilterSubRegion", where_clause = f"TableName = '{table_name}'")][0].replace("'", "''")
+ # End
+ # **********************************************************************
+
+ # **********************************************************************
+ # Start: Review the Region IDW table
+ arcpy.AddMessage("\nRegion IDW Table\n" )
+ region_table_fields = [f.name for f in arcpy.ListFields(region_table) if f.type not in ['Geometry', 'OID']]
+ # Get a record count to see if data is present; we don't want to add data
+ getcount = arcpy.management.GetCount(region_table)[0]
+ arcpy.AddMessage(f"\t{os.path.basename(region_table)} has {getcount} records")
+ del getcount
+ print_table(region_table)
+ arcpy.AddMessage(f"\n\tUnique Species Count: {len(dismap_tools.unique_values(table=region_table, field='Species'))} for {os.path.basename(region_table)}")
+ # End
+ # **********************************************************************
+
+ # **********************************************************************
+ # Start: Review the new LayerSpeciesYearImageName table
+ arcpy.AddMessage("\nImage Name Table\n" )
+ layer_species_year_image_name_fields = [f.name for f in arcpy.ListFields(layer_species_year_image_name) if f.type not in ['Geometry', 'OID']]
+ arcpy.AddMessage(f"Image Name Fields:\n\t{', '.join(layer_species_year_image_name_fields)}")
+ print_table(layer_species_year_image_name)
+ # End
+ # **********************************************************************
+
+ # **********************************************************************
+ # Start: Get information from the species filter table to create a
+ # species filter dictionary
+ arcpy.AddMessage("\nCreating the Species_Filter dictionary\n" )
+
+ species_filter_table = os.path.join(region_gdb, "Species_Filter")
+ species_filter_table_fields = [f.name for f in arcpy.ListFields(species_filter_table) if f.type not in ['Geometry', 'OID']]
+ # Get a record count to see if data is present; we don't want to add data
+ getcount = arcpy.management.GetCount(species_filter_table)[0]
+ arcpy.AddMessage(f"\t{os.path.basename(species_filter_table)} has {getcount} records")
+ arcpy.AddMessage(f"\tUnique Species Count: {len(dismap_tools.unique_values(table=species_filter_table, field='Species'))} for {os.path.basename(species_filter_table)}")
+ del getcount
+ #arcpy.AddMessage(species_filter_table_fields)
+ print_table(species_filter_table)
+ # Species, CommonName, TaxonomicGroup, FilterRegion, FilterSubRegion, ManagementBody, ManagementPlan, DistributionProjectName
+ species_filter = {}
+ with arcpy.da.SearchCursor(species_filter_table, species_filter_table_fields, f"FilterSubRegion = '{filter_subregion}'") as cursor:
+ for row in cursor:
+ #arcpy.AddMessage(row)
+ species_filter[row[0]] = [row[1],row[2],row[3],row[4],row[5],row[6],row[7]]
+ del row
+ del cursor, species_filter_table, species_filter_table_fields
+ # End
+ # **********************************************************************
+#Datasets
+# table_name
+# DatasetCode, TableName, Region, Season, DistributionProjectName, SummaryProduct, FilterRegion, FilterSubRegion
+# ['AI', 'AI_IDW', 'Aleutian Islands', '', 'NMFS/Rutgers IDW Interpolation', 'Yes', 'Alaska', 'Aleutian Islands'
+
+# IDW Table
+# DatasetCode, Region, Season, DistributionProjectName, SummaryProduct, SampleID, Species, TransformUnit, CommonName, SpeciesCommonName, CommonNameSpecies, CoreSpecies, Stratum
+# ['AI', 'Aleutian Islands', '', 'NMFS/Rutgers IDW Interpolation', 'Yes', '-10850', 'Anoplopoma fimbria', 'cuberoot', 'Sablefish', 'Anoplopoma fimbria (Sablefish)', 'Sablefish (Anoplopoma fimbria)', 'Yes', '721']
+
+# Species Filter
+# Species, CommonName, TaxonomicGroup, FilterRegion, FilterSubRegion, ManagementBody, ManagementPlan, DistributionProjectName
+# ['Anoplopoma fimbria', 'Sablefish', 'Perciformes/Cottoidei (sculpins)', 'Alaska', 'Aleutian Islands', 'NPFMC', 'Groundfish of the Bering Sea and Aleutian Islands Management Area', 'NMFS/Rutgers IDW Interpolation']
+
+# Image Name Table
+# DatasetCode, Region, Season, SummaryProduct, FilterRegion, FilterSubRegion, Species, CommonName, SpeciesCommonName, CommonNameSpecies, TaxonomicGroup, ManagementBody, ManagementPlan, DistributionProjectName, CoreSpecies, Variable, Value, Dimensions, ImageName
+
+ arcpy.AddMessage("\nDefining the case fields\n")
+
+ case_fields = [f for f in layer_species_year_image_name_fields if f in region_table_fields]
+ arcpy.AddMessage(f"Case Fields:\n\t{', '.join(case_fields)}")
+
+ # Execute Statistics to get unique set of records
+ table_name_tmp = table_name+"_tmp"
+ stats_fields = [[f"{f}", "COUNT"] for f in case_fields]
+
+ arcpy.AddMessage(f"Statistics Analysis of {table_name} Table")
+ arcpy.analysis.Statistics(table_name, table_name_tmp, stats_fields, case_fields)
+ arcpy.AddMessage("\tStatistics Analysis: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ del stats_fields, case_fields
+
+ table_name_tmp_fields = [f.name for f in arcpy.ListFields(table_name_tmp) if f.type not in ['Geometry', 'OID']]
+
+ table_name_tmp_drop_fields = ";".join([f for f in table_name_tmp_fields if "FREQUENCY" in f or "COUNT" in f])
+ del table_name_tmp_fields
+
+ arcpy.management.DeleteField(in_table=table_name_tmp, drop_field=table_name_tmp_drop_fields)
+ del table_name_tmp_drop_fields
+
+ # Get a record count to see if data is present; we don't want to add data
+ getcount = arcpy.management.GetCount(table_name_tmp)[0]
+ arcpy.AddMessage(f"\t{table_name_tmp} has {getcount} records")
+ del getcount
+
+ arcpy.AddMessage(f"\tAdding the Variable, Dimensions, and ImageName Fields to the {table_name_tmp} table")
+
+ table_name_tmp_new_fields = ['FilterRegion', 'FilterSubRegion', 'TaxonomicGroup',
+ 'ManagementBody', 'ManagementPlan', 'DistributionProjectName',
+ 'Variable', 'Value', 'Dimensions', 'ImageName',]
+
+ table_name_tmp_fields = [f.name for f in arcpy.ListFields(table_name_tmp) if f.type not in ['Geometry', 'OID']]
+ arcpy.AddMessage(table_name_tmp_fields)
+
+ table_name_tmp_new_fields = [f for f in table_name_tmp_new_fields if f not in table_name_tmp_fields]
+ arcpy.AddMessage(table_name_tmp_new_fields)
+
+ field_definitions = dismap_tools.field_definitions(csv_data_folder, "")
+
+ field_definition_list = []
+ for table_name_tmp_new_field in table_name_tmp_new_fields:
+ #arcpy.AddMessage(table_name_tmp_new_field)
+ field_definition_list.append([field_definitions[table_name_tmp_new_field]["field_name"],
+ field_definitions[table_name_tmp_new_field]["field_type"],
+ field_definitions[table_name_tmp_new_field]["field_aliasName"],
+ field_definitions[table_name_tmp_new_field]["field_length"]])
+ del table_name_tmp_new_field
+ del field_definitions
+
+ arcpy.AddMessage(f"Adding Fields to Table: {table_name}_tmp")
+ arcpy.management.AddFields(in_table = table_name_tmp, field_description = field_definition_list, template="")
+ arcpy.AddMessage("\t{0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ del field_definition_list
+
+ del table_name_tmp_new_fields, table_name_tmp_fields
+
+ # The following calculates the time stamp for the Dataset
+ # Use Update Cursor instead of Calculate Field
+ fields = ["DatasetCode", "Region", "Species", "Year", "FilterRegion",
+ "FilterSubRegion", "TaxonomicGroup", "ManagementBody",
+ "ManagementPlan", "DistributionProjectName", "Variable",
+ "Value", "Dimensions", "ImageName"]
+
+ with arcpy.da.UpdateCursor(table_name_tmp, fields) as cursor:
+ for row in cursor:
+ variable = row[2].replace("(","").replace(")","").replace(".","")
+ value = "Species"
+ dimensions = "StdTime"
+ imagename = f"{table_name}_{variable.replace(' ','_')}_{str(row[3])}"
+ if row[2] in species_filter:
+ row[4] = filter_region.replace("''", "'")
+ row[5] = filter_subregion.replace("''", "'")
+ row[6] = species_filter[row[2]][1] # TaxonomicGroup
+ row[7] = species_filter[row[2]][4] # ManagementBody
+ row[8] = species_filter[row[2]][5] # ManagementPlan
+ row[9] = species_filter[row[2]][6] # DistributionProjectName
+ else:
+ row[4] = ""
+ row[5] = ""
+ row[6] = ""
+ row[7] = ""
+ row[8] = ""
+ row[9] = ""
+ row[10] = variable
+ row[11] = value
+ row[12] = dimensions
+ row[13] = imagename
+ cursor.updateRow(row)
+ del row, variable, value, dimensions, imagename
+ del cursor
+ del fields
+ del species_filter
+
+ arcpy.management.Append(inputs = table_name_tmp, target = layer_species_year_image_name, schema_type="NO_TEST", field_mapping="", subtype="")
+ arcpy.AddMessage("\tAppend: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ table_name_tmp_fields = [f.name for f in arcpy.ListFields(table_name_tmp) if f.type not in ['Geometry', 'OID']]
+
+ case_fields = [f.name for f in arcpy.ListFields(table_name_tmp) if f.type not in ['Geometry', 'OID'] and f.name not in ["CoreSpecies", "Species", "CommonName", "SpeciesCommonName", "CommonNameSpecies", "TaxonomicGroup", "ManagementBody", "ManagementPlan", "Variable", "Value", "Dimensions", "ImageName"]]
+
+ # Execute Statistics to get unique set of records
+ table_name_tmp_stats = table_name_tmp+"_stats"
+
+ stats_fields = [[f"{f}", "COUNT"] for f in case_fields]
+
+ arcpy.AddMessage(f"\tStatistics Analysis of '{table_name}_tmp' Table")
+
+ arcpy.analysis.Statistics(table_name_tmp, table_name_tmp_stats, stats_fields, case_fields)
+ arcpy.AddMessage("\tStatistics Analysis: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del stats_fields, case_fields
+
+ fields = [f.name for f in arcpy.ListFields(table_name_tmp_stats) if f.type not in ['Geometry', 'OID']]
+
+ drop_fields = ";".join([f for f in fields if "FREQUENCY" in f or "COUNT" in f])
+ del fields
+
+ arcpy.management.DeleteField(in_table=table_name_tmp_stats, drop_field=drop_fields)
+ arcpy.AddMessage("\tDelete Field: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ del drop_fields
+
+ # Get a record count to see if data is present; we don't want to add data
+ getcount = arcpy.management.GetCount(table_name_tmp_stats)[0]
+ arcpy.AddMessage(f'\t\t> {os.path.basename(table_name_tmp_stats)} has {getcount} records')
+ del getcount
+
+ arcpy.AddMessage(f'\t\t> Add Variable, Dimensions, and ImageName \n\t\t> Fields to {os.path.basename(table_name_tmp_stats)} table\n')
+
+ table_name_tmp_new_fields = ['CoreSpecies', 'Variable', 'Value', 'Dimensions', 'ImageName',]
+
+ tb_fields = [f.name for f in arcpy.ListFields(table_name_tmp_stats) if f.type not in ['Geometry', 'OID']]
+
+ table_name_tmp_new_fields = [f for f in table_name_tmp_new_fields if f not in tb_fields]
+ del tb_fields
+
+ field_definitions = dismap_tools.field_definitions(csv_data_folder, "")
+
+ field_definition_list = []
+ for table_name_tmp_new_field in table_name_tmp_new_fields:
+ field_definition_list.append([field_definitions[table_name_tmp_new_field]["field_name"],
+ field_definitions[table_name_tmp_new_field]["field_type"],
+ field_definitions[table_name_tmp_new_field]["field_aliasName"],
+ field_definitions[table_name_tmp_new_field]["field_length"]])
+ del table_name_tmp_new_field
+ del field_definitions
+ del table_name_tmp_new_fields
+
+ arcpy.AddMessage(f"Adding Fields to Table: {table_name}_tmp")
+ arcpy.management.AddFields(in_table = table_name_tmp_stats, field_description = field_definition_list, template="")
+ arcpy.AddMessage("\t{0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ del field_definition_list
+
+ fields = [f.name for f in arcpy.ListFields(table_name_tmp_stats) if f.type not in ['Geometry', 'OID']]
+
+ arcpy.AddMessage(fields)
+ # ['DatasetCode', 'Region', 'Season', 'SummaryProduct', 'DistributionProjectName',
+ # 'Year', 'StdTime', 'FilterRegion', 'FilterSubRegion', 'CoreSpecies', 'Variable',
+ # 'Value', 'Dimensions', 'ImageName']
+
+ with arcpy.da.UpdateCursor(table_name_tmp_stats, fields) as cursor:
+ for row in cursor:
+ variable = "Core Species Richness"
+ value = "Core Species Richness"
+ dimensions = "StdTime"
+ imagename = f"{table_name}_{variable.replace(' ','_')}_{str(row[5])}"
+ row[7] = filter_region.replace("''", "'")
+ row[8] = filter_subregion.replace("''", "'")
+ row[9] = "Yes"
+ row[10] = variable
+ row[11] = value
+ row[12] = dimensions
+ row[13] = imagename
+ arcpy.AddMessage(row)
+ cursor.updateRow(row)
+ del row, variable, value, dimensions, imagename
+ del cursor
+
+ arcpy.management.Append(inputs = table_name_tmp_stats, target = layer_species_year_image_name, schema_type="NO_TEST", field_mapping="", subtype="")
+
+ # The following calculates the time stamp for the Dataset
+ # Use Update Cursor instead of Calculate Field
+
+ with arcpy.da.UpdateCursor(table_name_tmp_stats, fields) as cursor:
+ for row in cursor:
+ # #variable = "Core Species Richness" if row[8] == "Yes" else "Species Richness"
+ variable = "Species Richness"
+ value = "Species Richness"
+ dimensions = "StdTime"
+ imagename = f"{table_name}_{variable.replace(' ','_')}_{str(row[5])}"
+ row[7] = filter_region.replace("''", "'")
+ row[8] = filter_subregion.replace("''", "'")
+ row[9] = "No"
+ row[10] = variable
+ row[11] = value
+ row[12] = dimensions
+ row[13] = imagename
+ arcpy.AddMessage(row)
+ cursor.updateRow(row)
+ del row, variable, value, dimensions, imagename
+ del fields, cursor
+
+ arcpy.management.Append(inputs = table_name_tmp_stats, target = layer_species_year_image_name, schema_type="NO_TEST", field_mapping="", subtype="")
+
+ del table_name_tmp_stats, table_name_tmp_fields
+
+ del table_name_tmp
+
+ # Alter Field Names to layer_species_year_image_name
+ dismap_tools.alter_fields(csv_data_folder, layer_species_year_image_name)
+
+ dataset_md = md.Metadata(layer_species_year_image_name)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ del dataset_md
+
+ # End of business logic for the worker function
+ arcpy.AddMessage(f"Processing for: {table_name} complete")
+
+ arcpy.env.workspace = region_gdb
+ datasets = [ds for ds in arcpy.ListTables("*") if not ds.endswith("LayerSpeciesYearImageName")]
+ for dataset in datasets:
+## arcpy.AddMessage(f"Deleting: '{dataset}'")
+## arcpy.management.Delete(dataset)
+ del dataset
+ del datasets
+
+ # Declared Variables
+ del filter_region, filter_subregion
+ del datasets_table, datasets_table_fields, region_table_fields
+ del layer_species_year_image_name, layer_species_year_image_name_fields
+ del csv_data_folder, region_table, scratch_workspace, table_name
+ # Imports
+ del md, dismap_tools
+ # Function parameter
+ del region_gdb
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit as se:
+ arcpy.AddError(f"Caught a SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
+ sys.exit()
+ except Exception as e:
+ arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
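The cursor loops above update rows positionally, so correctness depends on the order of the `fields` list. A minimal pure-Python stand-in (no arcpy; the `fields` subset, `table_name`, and sample rows below are made up for illustration) shows the same pattern:

```python
# Pure-Python stand-in for the UpdateCursor loops above: each row is a
# list indexed by position in `fields`, so field order must match exactly.
fields = ["Year", "CoreSpecies", "Variable", "Value", "ImageName"]
table_name = "WC_ANN_IDW"  # illustrative dataset code
rows = [[2019, None, None, None, None], [2020, None, None, None, None]]

for row in rows:
    variable = "Species Richness"
    row[fields.index("CoreSpecies")] = "No"
    row[fields.index("Variable")] = variable
    row[fields.index("Value")] = variable
    # Same image-name convention as the worker: code_variable_year
    row[fields.index("ImageName")] = f"{table_name}_{variable.replace(' ', '_')}_{row[0]}"

print(rows[0][fields.index("ImageName")])  # → WC_ANN_IDW_Species_Richness_2019
```

Looking up indices via `fields.index(...)` instead of hard-coded positions makes the loop robust to field reordering, at the cost of a linear search per assignment.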
+def script_tool(project_gdb=""):
+ try:
+ from create_species_year_image_name_table_director import preprocessing
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ ## # Set worker parameters
+ ## #table_name = "AI_IDW"
+ ## table_name = "HI_IDW"
+ ## #table_name = "NBS_IDW"
+ ## #table_name = "ENBS_IDW"
+
+ #table_names = ["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW",]
+ table_names = ["WC_ANN_IDW"]
+
+ preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ del project_folder
+
+ for table_name in table_names:
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+
+ try:
+ worker(region_gdb=region_gdb)
+ except SystemExit:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+
+ del table_name, region_gdb
+
+ del table_names
+
+ # Declared Variables
+ del scratch_folder
+ # Imports
+
+ # Function Parameters
+ del project_gdb
+
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ hours, rem = divmod(elapse_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
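The elapsed-time arithmetic in `script_tool` can be checked in isolation; a small sketch of the same `divmod` split (the `format_elapsed` name is mine, not part of the script):

```python
def format_elapsed(start_time, end_time):
    # Same divmod arithmetic as the elapsed-time block in script_tool:
    # split total seconds into hours, then the remainder into minutes/seconds.
    hours, rem = divmod(end_time - start_time, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f}"

print(format_elapsed(0.0, 3725.5))  # → 01:02:05.50
```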
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+
+ script_tool(project_gdb)
+
+ arcpy.SetParameterAsText(1, "Result")
+
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/dev_dismap_metadata_processing.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/dev_dismap_metadata_processing.py
new file mode 100644
index 0000000..0c6645b
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/dev_dismap_metadata_processing.py
@@ -0,0 +1,4381 @@
+# -*- coding: utf-8 -*-
+# -------------------------------------------------------------------------------
+# Name: module1
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 03/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+# -------------------------------------------------------------------------------
+import os, sys # built-ins first
+import traceback
+import importlib
+import inspect
+
+import arcpy # third-parties second
+
+def new_function():
+ try:
+ pass
+ # Declared Variables
+ # Imports
+ # Function Parameters
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
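Each function in this module audits `locals()` on the success path to catch names that were not `del`-eted. A standalone sketch of that check (`leak_check` is a made-up name; `x` is deliberately left bound so the audit reports it):

```python
def leak_check():
    # The iterable of the comprehension is evaluated in this function's
    # scope, so locals() reflects leak_check's own bindings at that point.
    x = 42
    rk = [key for key in locals().keys() if not key.startswith('__')]
    return rk

print(leak_check())  # → ['x']
```

Note that `rk` itself is not reported, because `locals()` is snapshotted before the comprehension's result is assigned.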
+def date_code(version):
+ try:
+ from datetime import datetime
+ from time import strftime
+
+ _date_code = ""
+
+ if version.isdigit():
+ # The version value is in 'YYYYMMDD' format (e.g. 20230501)
+ # and is converted to 'Month Day Year' (e.g. May 1 2023).
+ # Note: '%#d' (unpadded day) is Windows-only; POSIX uses '%-d'
+ _date_code = datetime.strptime(version, "%Y%m%d").strftime("%B %#d %Y")
+ else:
+ # The version value is 'Month Day Year' (e.g. May 1 2023)
+ # and is converted to 'YYYYMMDD' format (e.g. 20230501)
+ _date_code = datetime.strptime(version, "%B %d %Y").strftime("%Y%m%d")
+ # Imports
+ del datetime, strftime
+ del version
+
+ import copy
+ __results = copy.deepcopy(_date_code)
+ del _date_code, copy
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return __results
+ finally:
+ if "__results" in locals().keys(): del __results
+
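The two-way conversion in `date_code` round-trips between the version-code and title-date forms. A portable sketch (function names `to_title`/`to_version` are mine; formatting the day via `dt.day` avoids the Windows-only `'%#d'` used above):

```python
from datetime import datetime

def to_title(version):
    # 'YYYYMMDD' -> 'Month D YYYY' without a zero-padded day
    dt = datetime.strptime(version, "%Y%m%d")
    return f"{dt.strftime('%B')} {dt.day} {dt.year}"

def to_version(title):
    # 'Month D YYYY' -> 'YYYYMMDD'; strptime's %d accepts an unpadded day
    return datetime.strptime(title, "%B %d %Y").strftime("%Y%m%d")

print(to_title("20230501"))      # → May 1 2023
print(to_version("May 1 2023"))  # → 20230501
```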
+# #
+# Function: unique_years
+# Gets the unique years in a table
+# @param string table: The name of the layer
+# @return array: a sorted year array so we can go in order.
+# #
+def unique_years(table):
+ #print(table)
+ arcpy.management.SelectLayerByAttribute( table, "CLEAR_SELECTION" )
+ arcpy.management.SelectLayerByAttribute( table, "NEW_SELECTION", "Year IS NOT NULL")
+ with arcpy.da.SearchCursor(table, ["Year"]) as cursor:
+ return sorted({row[0] for row in cursor})
+
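The set comprehension in `unique_years` deduplicates and sorts in one pass. A cursor-free sketch with made-up rows, shaped like the 1-tuples a `SearchCursor` yields:

```python
# Rows as a cursor would yield them: 1-tuples, with possible repeats
rows = [(2019,), (2017,), (2019,), (2021,), (2017,)]
years = sorted({row[0] for row in rows})
print(years)  # → [2017, 2019, 2021]
```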
+def xml_tree_merge(source, target):
+ """Merge two XML trees so that each recursively found leaf element of
+ `target` is added to `source`. Where a child with the same tag exists in
+ both trees, the merge recurses beneath it; tree structure is created in
+ `source` as required to reflect the position of the leaf element in
+ `target` (element order is not guaranteed). Returns a merged deep copy
+ of `source`.
+ """
+ import copy
+
+ def inner(aparent, bparent):
+ for bchild in bparent:
+ achild = aparent.xpath('./' + bchild.tag)
+ if not achild:
+ aparent.append(bchild)
+ elif len(bchild): # getchildren() was removed in Python 3.9
+ inner(achild[0], bchild)
+
+ source_copy = copy.deepcopy(source)
+ inner(source_copy, target)
+ return source_copy
+
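`xml_tree_merge` relies on lxml's `.xpath()`. The same merge can be sketched with the stdlib `xml.etree.ElementTree`, swapping in `find()` for `xpath()` (an equivalent for single-tag paths, not the module's actual dependency):

```python
import copy
import xml.etree.ElementTree as ET

def merge_trees(source, target):
    # Graft target's subtrees into a deep copy of source, recursing where a
    # child with the same tag already exists on both sides.
    def inner(aparent, bparent):
        for bchild in bparent:
            achild = aparent.find(bchild.tag)
            if achild is None:
                aparent.append(bchild)
            elif len(bchild):
                inner(achild, bchild)

    source_copy = copy.deepcopy(source)
    inner(source_copy, target)
    return source_copy

a = ET.fromstring("<a><b><c>1</c></b></a>")
b = ET.fromstring("<a><b><d>2</d></b></a>")
merged = merge_trees(a, b)
print(ET.tostring(merged, encoding="unicode"))  # → <a><b><c>1</c><d>2</d></b></a>
```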
+def dataset_title_dict(project_gdb=""):
+ try:
+ if "Scratch" in project_gdb:
+ project = os.path.basename(os.path.dirname(os.path.dirname(project_gdb)))
+ else:
+ project = os.path.basename(os.path.dirname(project_gdb))
+
+ project_folder = os.path.dirname(project_gdb)
+ crf_folder = rf"{project_folder}\CRFs"
+ _credits = "These data were produced by NMFS OST."
+ access_constraints = "***No Warranty*** The user assumes the entire risk related to its use of these data. NMFS is providing these data 'as is' and NMFS disclaims any and all warranties, whether express or implied, including (without limitation) any implied warranties of merchantability or fitness for a particular purpose. No warranty expressed or implied is made regarding the accuracy or utility of the data on any other system or for general or scientific purposes, nor shall the act of distribution constitute any such warranty. It is strongly recommended that careful attention be paid to the contents of the metadata file associated with these data to evaluate dataset limitations, restrictions or intended use. In no event will NMFS be liable to you or to any third party for any direct, indirect, incidental, consequential, special or exemplary damages or lost profit resulting from any use or misuse of these data."
+
+ __datasets_dict = {}
+
+ dataset_codes = {row[0] : [row[1], row[2], row[3], row[4], row[5]] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"), ["DatasetCode", "PointFeatureType", "DistributionProjectCode", "FilterRegion", "FilterSubRegion", "Season"])}
+ for dataset_code in dataset_codes:
+ point_feature_type = dataset_codes[dataset_code][0] if dataset_codes[dataset_code][0] else ""
+ distribution_project_code = dataset_codes[dataset_code][1] if dataset_codes[dataset_code][1] else ""
+ filter_region = dataset_codes[dataset_code][2] if dataset_codes[dataset_code][2] else dataset_code.replace("_", " ")
+ filter_sub_region = dataset_codes[dataset_code][3] if dataset_codes[dataset_code][3] else dataset_code.replace("_", " ")
+ season = dataset_codes[dataset_code][4] if dataset_codes[dataset_code][4] else ""
+
+ tags = f"DisMAP; {filter_region}" if filter_region == filter_sub_region else f"DisMAP; {filter_region}; {filter_sub_region}"
+ tags = f"{tags}; {season}" if season else f"{tags}"
+ tags = f"{tags}; distribution; seasonal distribution; fish; invertebrates; climate change; fishery-independent surveys; ecological dynamics; oceans; biosphere; earth science; species/population interactions; aquatic sciences; fisheries; range changes"
+ summary = "These data were created as part of the DisMAP project to enable visualization and analysis of changes in fish and invertebrate distributions"
+
+ #print(f"Dataset Code: {dataset_code}")
+ if distribution_project_code:
+ if distribution_project_code == "IDW":
+
+ #table_name = f"{dataset_code}_{distribution_project_code}_TABLE"
+ table_name = f"{dataset_code}_{distribution_project_code}"
+ table_name_s = f"{table_name}_{date_code(project)}"
+ table_name_st = f"{filter_sub_region} {season} Table {date_code(project)}".replace('  ', ' ')
+
+ #print(f"\tProcessing: {table_name}")
+
+ __datasets_dict[table_name] = {"Dataset Service" : table_name_s,
+ "Dataset Service Title" : table_name_st,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"This table represents the CSV Data files in ArcGIS format",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ del table_name, table_name_s, table_name_st
+
+ table_name = f"{dataset_code}_{distribution_project_code}"
+ sample_locations_fc = f"{table_name}_{point_feature_type.replace(' ', '_')}"
+ sample_locations_fcs = f"{table_name}_{point_feature_type.replace(' ', '_')}_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} {point_feature_type} {date_code(project)}"
+ sample_locations_fcst = f"{feature_service_title.replace('  ', ' ')}"
+ del feature_service_title
+
+ __datasets_dict[sample_locations_fc] = {"Dataset Service" : sample_locations_fcs,
+ "Dataset Service Title" : sample_locations_fcst,
+ "Tags" : tags,
+ "Summary" : f"{summary}. These layers provide information on the spatial extent/boundaries of the bottom trawl surveys. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"This survey points layer provides information on both the locations where species are caught in several NOAA Fisheries surveys and the amount (i.e., biomass weight catch per unit effort, standardized to kg/ha) of each species that was caught at each location. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tSample Locations FC: {sample_locations_fc}")
+ #print(f"\tSample Locations FCS: {sample_locations_fcs}")
+ #print(f"\tSample Locations FST: {sample_locations_fcst}")
+
+ del table_name, sample_locations_fc, sample_locations_fcs, sample_locations_fcst
+
+ table_name = f"{dataset_code}"
+ sample_locations_fc = f"{table_name}_{point_feature_type.replace(' ', '_')}"
+ sample_locations_fcs = f"{table_name}_{point_feature_type.replace(' ', '_')}_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} {point_feature_type} {date_code(project)}"
+ sample_locations_fcst = f"{feature_service_title.replace('  ', ' ')}"
+ del feature_service_title
+
+ __datasets_dict[sample_locations_fc] = {"Dataset Service" : sample_locations_fcs,
+ "Dataset Service Title" : sample_locations_fcst,
+ "Tags" : tags,
+ "Summary" : f"{summary}. These layers provide information on the spatial extent/boundaries of the bottom trawl surveys. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"This survey points layer provides information on both the locations where species are caught in several NOAA Fisheries surveys and the amount (i.e., biomass weight catch per unit effort, standardized to kg/ha) of each species that was caught at each location. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tSample Locations FC: {sample_locations_fc}")
+ #print(f"\tSample Locations FCS: {sample_locations_fcs}")
+ #print(f"\tSample Locations FST: {sample_locations_fcst}")
+
+ del table_name, sample_locations_fc, sample_locations_fcs, sample_locations_fcst
+
+
+ elif distribution_project_code != "IDW":
+
+ #table_name = f"{dataset_code}_TABLE"
+ table_name = f"{dataset_code}"
+ table_name_s = f"{table_name}_{date_code(project)}"
+ table_name_st = f"{filter_sub_region} {season} Table {date_code(project)}".replace('  ', ' ')
+
+ #print(f"\tProcessing: {table_name}")
+
+ __datasets_dict[table_name] = {"Dataset Service" : table_name_s,
+ "Dataset Service Title" : table_name_st,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"This table represents the CSV Data files in ArcGIS format",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ del table_name, table_name_s, table_name_st
+
+ table_name = f"{dataset_code}"
+ grid_points_fc = f"{table_name}_{point_feature_type.replace(' ', '_')}"
+ grid_points_fcs = f"{table_name}_{point_feature_type.replace(' ', '_')}_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Sample Locations {date_code(project)}"
+ grid_points_fcst = f"{dataset_code.replace('_', ' ')} {point_feature_type} {date_code(project)}"
+
+ __datasets_dict[grid_points_fc] = {"Dataset Service" : grid_points_fcs,
+ "Dataset Service Title" : grid_points_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"This grid points layer provides information on model output amount (i.e., biomass weight catch per unit effort, standardized to kg/ha) of each species that was modeled at each location. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tGRID Points FC: {grid_points_fc}")
+ #print(f"\tGRID Points FCS: {grid_points_fcs}")
+ #print(f"\tGRID Points FCST: {grid_points_fcst}")
+
+ del table_name, grid_points_fc, grid_points_fcs, grid_points_fcst
+
+ dataset_code = f"{dataset_code}_{distribution_project_code}" if distribution_project_code not in dataset_code else dataset_code
+
+ # Bathymetry
+ bathymetry_r = f"{dataset_code}_Bathymetry"
+ bathymetry_rs = f"{dataset_code}_Bathymetry_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Bathymetry {date_code(project)}"
+ bathymetry_rst = f"{feature_service_title.replace('  ', ' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {bathymetry_r}")
+
+ __datasets_dict[bathymetry_r] = {"Dataset Service" : bathymetry_rs,
+ "Dataset Service Title" : bathymetry_rst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The bathymetry dataset represents the ocean depth at that grid cell.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tBathymetry R: {bathymetry_r}")
+ #print(f"\tBathymetry RS: {bathymetry_rs}")
+ #print(f"\tBathymetry RST: {bathymetry_rst}")
+
+ del bathymetry_r, bathymetry_rs, bathymetry_rst
+
+ # Boundary
+ boundary_fc = f"{dataset_code}_Boundary"
+ boundary_fcs = f"{dataset_code}_Boundary_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Boundary {date_code(project)}"
+ boundary_fcst = f"{feature_service_title.replace('  ', ' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {boundary_fc}")
+
+ __datasets_dict[boundary_fc] = {"Dataset Service" : boundary_fcs,
+ "Dataset Service Title" : boundary_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"These files contain the spatial boundaries of the NOAA Fisheries Bottom-trawl surveys. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Eastern Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tBoundary FC: {boundary_fc}")
+ #print(f"\tBoundary FCS: {boundary_fcs}")
+ #print(f"\tBoundary FCST: {boundary_fcst}")
+
+ del boundary_fc, boundary_fcs, boundary_fcst
+
+ # Boundary Line
+ boundary_line_fc = f"{dataset_code}_Boundary_Line"
+ boundary_line_fcs = f"{dataset_code}_Boundary_Line_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Boundary Line {date_code(project)}"
+ boundary_line_fcst = f"{feature_service_title.replace('  ', ' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {boundary_line_fc}")
+
+ __datasets_dict[boundary_line_fc] = {"Dataset Service" : boundary_line_fcs,
+ "Dataset Service Title" : boundary_line_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"These files contain the spatial boundaries of the NOAA Fisheries Bottom-trawl surveys. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Eastern Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tBoundary FC: {boundary_line_fc}")
+ #print(f"\tBoundary FCS: {boundary_line_fcs}")
+ #print(f"\tBoundary FCST: {boundary_line_fcst}")
+
+ del boundary_line_fc, boundary_line_fcs, boundary_line_fcst
+
+ # CRF
+ crf_r = f"{dataset_code}_CRF"
+ crf_rs = f"{dataset_code}_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} {dataset_code[dataset_code.rfind('_')+1:]} {date_code(project)}"
+ crf_rst = f"{feature_service_title.replace('  ', ' ')}"
+ #del feature_service_title
+
+ #print(f"Processing: {crf_r}")
+ #print(f"\t{crf_rs}")
+ #print(f"\t{feature_service_title}")
+ #print(f"\t{crf_rst}")
+
+ __datasets_dict[crf_r] = {"Dataset Service" : crf_rs,
+ "Dataset Service Title" : crf_rst,
+ "Tags" : tags,
+ "Summary" : f"{summary}. These interpolated biomass layers provide information on the spatial distribution of species caught in the NOAA Fisheries fisheries-independent surveys. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"NOAA Fisheries and its partners conduct fisheries-independent surveys in 8 regions in the US (Northeast, Southeast, Gulf of Mexico, West Coast, Gulf of Alaska, Bering Sea, Aleutian Islands, Hawai’i Islands). These surveys are designed to collect information on the seasonal distribution, relative abundance, and biodiversity of fish and invertebrate species found in U.S. waters. Over 400 species of fish and invertebrates have been identified in these surveys.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tCRF R: {crf_r}")
+ #print(f"\tCRF RS: {crf_rs}")
+ #print(f"\tCRF RST: {crf_rst}")
+
+ del crf_r, crf_rs, crf_rst
+
+ # Extent Points
+ extent_points_fc = f"{dataset_code}_Extent_Points"
+ extent_points_fcs = f"{dataset_code}_Extent_Points_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Extent Points {date_code(project)}"
+ extent_points_fcst = f"{feature_service_title.replace('  ', ' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {extent_points_fc}")
+
+ __datasets_dict[extent_points_fc] = {"Dataset Service" : extent_points_fcs,
+ "Dataset Service Title" : extent_points_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The Extent Points layer represents the extent of the model region.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tExtent Points FC: {extent_points_fc}")
+ #print(f"\tExtent Points FCS: {extent_points_fcs}")
+ #print(f"\tExtent Points FCST: {extent_points_fcst}")
+
+ del extent_points_fc, extent_points_fcs, extent_points_fcst
+
+ # Points
+ points_fc = f"{dataset_code}_Points"
+ points_fcs = f"{dataset_code}_Points_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Points {date_code(project)}"
+ points_fcst = f"{feature_service_title.replace('  ', ' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {points_fc}")
+
+ __datasets_dict[points_fc] = {"Dataset Service" : points_fcs,
+ "Dataset Service Title" : points_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The Points layer represents the extent of the model region.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tExtent Points FC: {points_fc}")
+ #print(f"\tExtent Points FCS: {points_fcs}")
+ #print(f"\tExtent Points FCST: {points_fcst}")
+
+ del points_fc, points_fcs, points_fcst
+
+ fishnet_fc = f"{dataset_code}_Fishnet"
+ fishnet_fcs = f"{dataset_code}_Fishnet_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Fishnet {date_code(project)}"
+ fishnet_fcst = f"{feature_service_title.replace('  ', ' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {fishnet_fc}")
+
+ __datasets_dict[fishnet_fc] = {"Dataset Service" : fishnet_fcs,
+ "Dataset Service Title" : fishnet_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The Fishnet is used to create the latitude and longitude rasters.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tFishnet FC: {fishnet_fc}")
+ #print(f"\tFishnet FCS: {fishnet_fcs}")
+ #print(f"\tFishnet FCST: {fishnet_fcst}")
+
+ del fishnet_fc, fishnet_fcs, fishnet_fcst
+
+ indicators_tb = f"{dataset_code}_Indicators"
+ indicators_tbs = f"{dataset_code}_Indicators_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Indicators Table {date_code(project)}"
+ indicators_tbst = f"{feature_service_title.replace('  ', ' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {indicators_t}")
+
+ __datasets_dict[indicators_tb] = {"Dataset Service" : indicators_tbs,
+ "Dataset Service Title" : indicators_tbst,
+ "Tags" : tags,
+ "Summary" : f"{summary}. This table provides the key metrics used to evaluate a species distribution shift. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"These data contain the key distribution metrics of center of gravity, range limits, and depth for each species in the portal. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tIndicators T: {indicators_t}")
+ #print(f"\tIndicators TS: {indicators_ts}")
+ #print(f"\tIndicators TST: {indicators_tst}")
+
+ del indicators_tb, indicators_tbs, indicators_tbst
+
+ lat_long_fc = f"{dataset_code}_Lat_Long"
+ lat_long_fcs = f"{dataset_code}_Lat_Long_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Lat Long {date_code(project)}"
+ lat_long_fcst = f"{feature_service_title.replace('  ', ' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {lat_long_fc}")
+
+ __datasets_dict[lat_long_fc] = {"Dataset Service" : lat_long_fcs,
+ "Dataset Service Title" : lat_long_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The lat_long layer is used to get the latitude & longitude values to create these rasters",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tLat Long FC: {lat_long_fc}")
+ #print(f"\tLat Long FCS: {lat_long_fcs}")
+ #print(f"\tLat Long FCST: {lat_long_fcst}")
+
+ del lat_long_fc, lat_long_fcs, lat_long_fcst
+
+ latitude_r = f"{dataset_code}_Latitude"
+ latitude_rs = f"{dataset_code}_Latitude_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Latitude {date_code(project)}"
+ latitude_rst = f"{feature_service_title.replace('  ', ' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {latitude_r}")
+
+ __datasets_dict[latitude_r] = {"Dataset Service" : latitude_rs,
+ "Dataset Service Title" : latitude_rst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The Latitude raster",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tLatitude R: {latitude_r}")
+ #print(f"\tLatitude RS: {latitude_rs}")
+ #print(f"\tLatitude RST: {latitude_rst}")
+
+ del latitude_r, latitude_rs, latitude_rst
+
+ layer_species_year_image_name_tb = f"{dataset_code}_LayerSpeciesYearImageName"
+ layer_species_year_image_name_tbs = f"{dataset_code}_LayerSpeciesYearImageName_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Layer Species Year Image Name Table {date_code(project)}"
+ layer_species_year_image_name_tbst = f"{feature_service_title.replace('  ', ' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {layer_species_year_image_name_tb}")
+
+ __datasets_dict[layer_species_year_image_name_tb] = {"Dataset Service" : layer_species_year_image_name_tbs,
+ "Dataset Service Title" : layer_species_year_image_name_tbst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"Layer Species Year Image Name Table",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tLayerSpeciesYearImageName T: {layer_species_year_image_name_tb}")
+ #print(f"\tLayerSpeciesYearImageName TS: {layer_species_year_image_name_tbs}")
+ #print(f"\tLayerSpeciesYearImageName TST: {layer_species_year_image_name_tbst}")
+
+ del layer_species_year_image_name_tb, layer_species_year_image_name_tbs, layer_species_year_image_name_tbst
+
+ longitude_r = f"{dataset_code}_Longitude"
+ longitude_rs = f"{dataset_code}_Longitude_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Longitude {date_code(project)}"
+ longitude_rst = f"{feature_service_title.replace('  ', ' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {longitude_r}")
+
+ __datasets_dict[longitude_r] = {"Dataset Service" : longitude_rs,
+ "Dataset Service Title" : longitude_rst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The Longitude raster",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tLongitude R: {longitude_r}")
+ #print(f"\tLongitude RS: {longitude_rs}")
+ #print(f"\tLongitude RST: {longitude_rst}")
+
+ del longitude_r, longitude_rs, longitude_rst
+
+ mosaic_r = f"{dataset_code}_Mosaic"
+ mosaic_rs = f"{dataset_code}_Mosaic_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} {dataset_code[dataset_code.rfind('_')+1:]} Mosaic {date_code(project)}"
+ mosaic_rst = f"{feature_service_title.replace('  ', ' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {mosaic_r}")
+
+ __datasets_dict[mosaic_r] = {"Dataset Service" : mosaic_rs,
+ "Dataset Service Title" : mosaic_rst,
+ #"Tags" : _tags,
+ "Tags" : tags,
+ "Summary" : f"{summary}. These interpolated biomass layers provide information on the spatial distribution of species caught in the NOAA Fisheries fisheries-independent surveys. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"NOAA Fisheries and its partners conduct fisheries-independent surveys in 8 regions in the US (Northeast, Southeast, Gulf of Mexico, West Coast, Gulf of Alaska, Bering Sea, Aleutian Islands, Hawai’i Islands). These surveys are designed to collect information on the seasonal distribution, relative abundance, and biodiversity of fish and invertebrate species found in U.S. waters. Over 400 species of fish and invertebrates have been identified in these surveys.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tMosaic R: {mosaic_r}")
+ #print(f"\tMosaic RS: {mosaic_rs}")
+ #print(f"\tMosaic RST: {mosaic_rst}")
+
+ del mosaic_r, mosaic_rs, mosaic_rst
+
+ crf_r = f"{dataset_code}.crf"
+ crf_rs = f"{dataset_code}_CRF_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} {dataset_code[dataset_code.rfind('_')+1:]} CRF {date_code(project)}"
+ crf_rst = feature_service_title.replace("  ", " ")  # collapse any double spaces left by empty parts
+ del feature_service_title
+
+ #print(f"\tProcessing: {crf_r}")
+
+ __datasets_dict[crf_r] = {"Dataset Service" : crf_rs,
+ "Dataset Service Title" : crf_rst,
+ #"Tags" : _tags,
+ "Tags" : tags,
+ "Summary" : f"{summary}. These interpolated biomass layers provide information on the spatial distribution of species caught in the NOAA Fisheries fisheries-independent surveys. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"NOAA Fisheries and its partners conduct fisheries-independent surveys in 8 regions in the US (Northeast, Southeast, Gulf of Mexico, West Coast, Gulf of Alaska, Bering Sea, Aleutian Islands, Hawai’i Islands). These surveys are designed to collect information on the seasonal distribution, relative abundance, and biodiversity of fish and invertebrate species found in U.S. waters. Over 400 species of fish and invertebrates have been identified in these surveys.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tCRF R: {crf_r}")
+ #print(f"\tCRF RS: {crf_rs}")
+ #print(f"\tCRF RST: {crf_rst}")
+
+ del crf_r, crf_rs, crf_rst
+
+ raster_mask_r = f"{dataset_code}_Raster_Mask"
+ raster_mask_rs = f"{dataset_code}_Raster_Mask_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Raster Mask {date_code(project)}"
+ raster_mask_rst = feature_service_title.replace("  ", " ")  # collapse any double spaces left by empty parts
+ del feature_service_title
+
+ #print(f"\tProcessing: {raster_mask_r}")
+
+ __datasets_dict[raster_mask_r] = {"Dataset Service" : raster_mask_rs,
+ "Dataset Service Title" : raster_mask_rst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"Raster Mask is used for image production",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tRaster_Mask R: {raster_mask_r}")
+ #print(f"\tRaster_Mask RS: {raster_mask_rs}")
+ #print(f"\tRaster_Mask RST: {raster_mask_rst}")
+
+ del raster_mask_r, raster_mask_rs, raster_mask_rst
+
+ region_fc = f"{dataset_code}_Region"
+ region_fcs = f"{dataset_code}_Region_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Region {date_code(project)}"
+ region_fcst = feature_service_title.replace("  ", " ")  # collapse any double spaces left by empty parts
+ del feature_service_title
+
+ #print(f"\tProcessing: {region_fc}")
+
+ __datasets_dict[region_fc] = {"Dataset Service" : region_fcs,
+ "Dataset Service Title" : region_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"These files contain the spatial boundaries of the NOAA Fisheries Bottom-trawl surveys. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tRegion FC: {region_fc}")
+ #print(f"\tRegion FCS: {region_fcs}")
+ #print(f"\tRegion FCST: {region_fcst}")
+
+ del region_fc, region_fcs, region_fcst
+
+ survey_area_fc = f"{dataset_code}_Survey_Area"
+ survey_area_fcs = f"{dataset_code}_Survey_Area_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Survey Area {date_code(project)}"
+ survey_area_fcst = feature_service_title.replace("  ", " ")  # collapse any double spaces left by empty parts
+ del feature_service_title
+
+ #print(f"\tProcessing: {survey_area_fc}")
+
+ __datasets_dict[survey_area_fc] = {"Dataset Service" : survey_area_fcs,
+ "Dataset Service Title" : survey_area_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"These files contain the spatial boundaries of the NOAA Fisheries Bottom-trawl surveys. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tSurvey Area FC: {survey_area_fc}")
+ #print(f"\tSurvey Area FCS: {survey_area_fcs}")
+ #print(f"\tSurvey Area FCST: {survey_area_fcst}")
+
+ del survey_area_fc, survey_area_fcs, survey_area_fcst
+
+
+ del tags
+
+ if not distribution_project_code:
+
+ if "Datasets" == dataset_code:
+
+ #print(f"\tProcessing: Datasets")
+
+ datasets_tb = dataset_code
+ datasets_tbs = f"{dataset_code}_{date_code(project)}"
+ datasets_tbst = f"{dataset_code} {date_code(project)}"
+
+ __datasets_dict[datasets_tb] = {"Dataset Service" : datasets_tbs,
+ "Dataset Service Title" : datasets_tbst,
+ "Tags" : "DisMAP, Datasets",
+ "Summary" : summary,
+ "Description" : "This table functions as a look-up table of values",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ del datasets_tb, datasets_tbs, datasets_tbst
+
+ elif "DisMAP_Regions" == dataset_code:
+
+ #print(f"\tProcessing: DisMAP_Regions")
+
+ regions_fc = dataset_code
+ regions_fcs = f"{dataset_code}_{date_code(project)}"
+ regions_fcst = f"DisMAP Regions {date_code(project)}"
+
+ __datasets_dict[regions_fc] = {"Dataset Service" : regions_fcs,
+ "Dataset Service Title" : regions_fcst,
+ "Tags" : "DisMAP Regions",
+ "Summary" : summary,
+ "Description" : "These files contain the spatial boundaries of the NOAA Fisheries Bottom-trawl surveys. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Eastern Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ del regions_fc, regions_fcs, regions_fcst
+
+ elif "Indicators" == dataset_code:
+
+ #print(f"\tProcessing: Indicators")
+
+ indicators_tb = f"{dataset_code}"
+ indicators_tbs = f"{dataset_code}_{date_code(project)}"
+ indicators_tbst = f"{dataset_code} {date_code(project)}"
+
+ __datasets_dict[indicators_tb] = {"Dataset Service" : indicators_tbs,
+ "Dataset Service Title" : indicators_tbst,
+ "Tags" : "DisMAP, Indicators",
+ "Summary" : f"{summary}. This table provides the key metrics used to evaluate a species distribution shift. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"These data contain the key distribution metrics of center of gravity, range limits, and depth for each species in the portal. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ del indicators_tb, indicators_tbs, indicators_tbst
+
+ elif "LayerSpeciesYearImageName" == dataset_code:
+
+ #print(f"\tProcessing: LayerSpeciesYearImageName")
+
+ layer_species_year_image_name_tb = dataset_code
+ layer_species_year_image_name_tbs = f"{dataset_code}_{date_code(project)}"
+ layer_species_year_image_name_tbst = f"Layer Species Year Image Name Table {date_code(project)}"
+
+ #print(f"\tProcessing: {layer_species_year_image_name_tb}")
+
+ __datasets_dict[layer_species_year_image_name_tb] = {"Dataset Service" : layer_species_year_image_name_tbs,
+ "Dataset Service Title" : layer_species_year_image_name_tbst,
+ "Tags" : "DisMAP, Layer Species Year Image Name Table",
+ "Summary" : summary,
+ "Description" : "This table functions as a look-up table of values",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tLayerSpeciesYearImageName T: {layer_species_year_image_name_tb}")
+ #print(f"\tLayerSpeciesYearImageName TS: {layer_species_year_image_name_tbs}")
+ #print(f"\tLayerSpeciesYearImageName TST: {layer_species_year_image_name_tbst}")
+
+ del layer_species_year_image_name_tb, layer_species_year_image_name_tbs, layer_species_year_image_name_tbst
+
+ elif "Species_Filter" == dataset_code:
+
+ #print(f"\tProcessing: Species_Filter")
+
+ species_filter_tb = dataset_code
+ species_filter_tbs = f"{dataset_code}_{date_code(project)}"
+ species_filter_tbst = f"Species Filter Table {date_code(project)}"
+
+ __datasets_dict[species_filter_tb] = {"Dataset Service" : species_filter_tbs,
+ "Dataset Service Title" : species_filter_tbst,
+ "Tags" : "DisMAP, Species Filter Table",
+ "Summary" : summary,
+ "Description" : "This table functions as a look-up table of values",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tSpecies_Filter T: {species_filter_tb}")
+ #print(f"\tSpecies_Filter TS: {species_filter_tbs}")
+ #print(f"\tSpecies_Filter TST: {species_filter_tbst}")
+
+ del species_filter_tb, species_filter_tbs, species_filter_tbst
+
+ elif "DisMAP_Survey_Info" == dataset_code:
+
+ #print(f"\tProcessing: DisMAP_Survey_Info")
+
+ tb = dataset_code
+ tbs = f"{dataset_code}_{date_code(project)}"
+ tbst = f"DisMAP Survey Info Table {date_code(project)}"
+
+ __datasets_dict[tb] = {"Dataset Service" : tbs,
+ "Dataset Service Title" : tbst,
+ "Tags" : "DisMAP; DisMAP Survey Info Table",
+ "Summary" : summary,
+ "Description" : "This table functions as a look-up table of values",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tDisMAP_Survey_Info T: {tb}")
+ #print(f"\tDisMAP_Survey_Info TS: {tbs}")
+ #print(f"\tDisMAP_Survey_Info TST: {tbst}")
+
+ del tb, tbs, tbst
+
+ else:
+ #print(f"\tProcessing: {dataset_code}")
+
+ #table = dataset_code
+ #table_s = f"{dataset_code}_{date_code(project)}"
+ #table_st = f"{table_s.replace('_',' ')} {date_code(project)}"
+ #print(f"\tProcessing: {table_s}")
+ #__datasets_dict[table] = {"Dataset Service" : table_s,
+ # "Dataset Service Title" : table_st,
+ # "Tags" : f"DisMAP, {table}",
+ # "Summary" : summary,
+ # "Description" : "Unknown table",
+ # "Credits" : _credits,
+ # "Access Constraints" : access_constraints}
+
+ #print(f"\tTable: {table}")
+ #print(f"\tTable TS: {table_s}")
+ #print(f"\tTable TST: {table_st}")
+
+ #del table, table_s, table_st
+
+ raise Exception(f"Dataset code '{dataset_code}' is not handled by this function")
+
+ else:
+ pass
+
+ del summary
+ del point_feature_type, distribution_project_code
+ del filter_region, filter_sub_region, season
+ del dataset_code
+
+ del _credits, access_constraints
+
+ del dataset_codes
+ del project_folder, crf_folder
+ del project, project_gdb
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return __datasets_dict
+ finally:
+ if "__datasets_dict" in locals().keys(): del __datasets_dict
+
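The service titles above are built by concatenating region, season, and date parts and then collapsing doubled spaces left by empty parts. A minimal sketch of that pattern (the function name and sample values are illustrative, not from the project):

```python
def make_service_title(*parts):
    """Join non-empty title parts with single spaces.

    Filtering out empty strings avoids the doubled spaces that appear
    when an optional part (e.g. season) is blank, so no follow-up
    replace("  ", " ") cleanup is needed.
    """
    return " ".join(p for p in parts if p)

title = make_service_title("Aleutian Islands", "", "Mosaic", "20250101")
```

This is equivalent to the collapse-double-spaces cleanup used in the title f-strings, but it handles any number of blank components in one pass.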
+def import_basic_template_xml(dataset_path=""):
+ try:
+ # Import
+ from lxml import etree
+ from io import StringIO, BytesIO
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ project_gdb = os.path.dirname(dataset_path)
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ import json
+ json_path = rf"{project_folder}\root_dict.json"
+ with open(json_path, "r") as json_file:
+ root_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del json
+
+## #print("Creating the Metadata Dictionary. Please wait!!")
+## metadata_dictionary = dataset_title_dict(project_gdb)
+## #print("Creating the Metadata Dictionary. Completed")
+
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+
+ print(f"Dataset: {dataset_name}")
+
+## xml_file = '''
+##
+##
+##
+##
+## Timothy J Haverland
+##
+##
+## tim.haverland@noaa.gov
+##
+##
+## https://www.fisheries.noaa.gov/about/office-science-and-technology
+##
+##
+##
+##
+##
+##
+##
+##
+## Melissa Ann Karp
+##
+##
+## melissa.karp@noaa.gov
+##
+##
+## https://www.fisheries.noaa.gov/about/office-science-and-technology
+##
+##
+##
+##
+##
+##
+##
+##
+##
+##
+## NMFS Office of Science and Technology
+##
+##
+## tim.haverland@noaa.gov
+##
+##
+## https://www.fisheries.noaa.gov/about/office-science-and-technology
+##
+##
+##
+##
+##
+##
+##
+##
+##
+## John F Kennedy
+##
+##
+## john.f.kennedy@noaa.gov
+##
+##
+## https://www.fisheries.noaa.gov/about/office-science-and-technology
+##
+##
+##
+##
+##
+##
+## '''
+
+## xml_file = '''
+##
+##
+## John F Kennedy
+##
+##
+## john.f.kennedy@noaa.gov
+##
+##
+## https://www.fisheries.noaa.gov/about/office-science-and-technology
+##
+##
+##
+##
+##
+##
+## '''
+
+## # Parse the XML
+## dataset_md = md.Metadata(dataset_path)
+## #dataset_md.synchronize('ALWAYS')
+## #dataset_md.save()
+## in_md = md.Metadata(rf"C:\Users\john.f.kennedy\Documents\ArcGIS\Projects\DisMap\ArcGIS-Analysis-Python\December 1 2024\Export\WC_TRI_IDW.xml")
+## dataset_md.copy(in_md)
+## #dataset_md.importMetadata(rf"C:\Users\john.f.kennedy\Documents\ArcGIS\Projects\DisMap\ArcGIS-Analysis-Python\December 1 2024\Export\WC_TRI_IDW.xml", "ARCGIS_METADATA")
+## #dataset_md.importMetadata(xml_file, "ARCGIS_METADATA")
+## dataset_md.save()
+## dataset_md.synchronize("OVERWRITE")
+## dataset_md.save()
+## #dataset_md.reload()
+## #dataset_md.synchronize("ALWAYS")
+## #dataset_md.save()
+## #dataset_md.reload()
+## del in_md
+## del dataset_md
+## del xml_file
+##
+## tags = "DisMap;"
+## summary = "These data were created as part of the DisMAP project to enable visualization and analysis of changes in fish and invertebrate distributions"
+## description = ""
+## project_credits = "These data were produced by NMFS OST."
+## access_constraints = "***No Warranty*** The user assumes the entire risk related to its use of these data. NMFS is providing these data 'as is' and NMFS disclaims any and all warranties, whether express or implied, including (without limitation) any implied warranties of merchantability or fitness for a particular purpose. No warranty expressed or implied is made regarding the accuracy or utility of the data on any other system or for general or scientific purposes, nor shall the act of distribution constitute any such warranty. It is strongly recommended that careful attention be paid to the contents of the metadata file associated with these data to evaluate dataset limitations, restrictions or intended use. In no event will NMFS be liable to you or to any third party for any direct, indirect, incidental, consequential, special or exemplary damages or lost profit resulting from any use or misuse of these data."
+##
+## dataset_md = md.Metadata(dataset_path)
+## dataset_md.title = f"{dataset_name.replace('_', ' ')}"
+## dataset_md.tags = f"{tags}{dataset_name.replace('_', ' ')};"
+## dataset_md.summary = summary
+## dataset_md.description = f"{description}{dataset_name.replace('_', ' ')}"
+## dataset_md.credits = project_credits
+## dataset_md.accessConstraints = access_constraints
+## dataset_md.save()
+## dataset_md.synchronize("ALWAYS")
+## dataset_md.save()
+## del dataset_md
+##
+## del tags, summary, description, project_credits, access_constraints
+
+
+ # Option #1
+ #empty_md = md.Metadata()
+ #dataset_md = md.Metadata(dataset_path)
+ #dataset_md.copy(empty_md)
+ #dataset_md.save()
+ #del empty_md
+ # Option #2
+ #empty_md = md.Metadata(xml_file)
+ #dataset_md = md.Metadata(dataset_path)
+ #dataset_md.copy(empty_md)
+ #dataset_md.save()
+ #del empty_md
+ # Option #3
+ #
+ #dataset_md = md.Metadata(dataset_path)
+ #dataset_md.copy(in_md)
+ #dataset_md.save()
+ #del in_md
+ # Option #4
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.importMetadata(rf"{project_folder}\metadata_template.xml")
+ dataset_md.save()
+ del dataset_md
+
+ tags = "DisMap;"
+ summary = "These data were created as part of the DisMAP project to enable visualization and analysis of changes in fish and invertebrate distributions"
+ description = ""
+ project_credits = "These data were produced by NMFS OST."
+ access_constraints = "***No Warranty*** The user assumes the entire risk related to its use of these data. NMFS is providing these data 'as is' and NMFS disclaims any and all warranties, whether express or implied, including (without limitation) any implied warranties of merchantability or fitness for a particular purpose. No warranty expressed or implied is made regarding the accuracy or utility of the data on any other system or for general or scientific purposes, nor shall the act of distribution constitute any such warranty. It is strongly recommended that careful attention be paid to the contents of the metadata file associated with these data to evaluate dataset limitations, restrictions or intended use. In no event will NMFS be liable to you or to any third party for any direct, indirect, incidental, consequential, special or exemplary damages or lost profit resulting from any use or misuse of these data."
+
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.title = f"{dataset_name.replace('_', ' ')}"
+ dataset_md.tags = f"{tags}{dataset_name.replace('_', ' ')};"
+ dataset_md.summary = summary
+ dataset_md.description = f"{description}{dataset_name.replace('_', ' ')}"
+ dataset_md.credits = project_credits
+ dataset_md.accessConstraints = access_constraints
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.reload()
+ export_folder = rf"{os.path.dirname(os.path.dirname(dataset_path))}\Export"
+ dataset_md.saveAsXML(rf"{export_folder}\{os.path.basename(dataset_path)}.xml", "REMOVE_ALL_SENSITIVE_INFO")
+ # To parse from a string, use the fromstring() function instead.
+ _tree = etree.parse(rf"{export_folder}\{os.path.basename(dataset_path)}.xml", parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ _root = _tree.getroot()
+ _root[:] = sorted(_root, key=lambda x: root_dict[x.tag])
+ del _root
+ etree.indent(_tree, space='\t')
+ _tree.write(rf"{export_folder}\{os.path.basename(dataset_path)}.xml", encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+ del _tree
+ del export_folder
+ del dataset_md
+
+ del tags, summary, description, project_credits, access_constraints
+
+
+ # Parse the XML
+ dataset_md = md.Metadata(dataset_path)
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md.xml), parser=parser)
+ #target_tree = etree.parse(xml_file, parser=parser)
+ target_root = target_tree.getroot()
+ target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag])
+ etree.indent(target_tree, space='\t')
+ print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ del parser, dataset_md
+
+ del target_tree, target_root
+
+
+
+## _tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## _root = _tree.getroot()
+## distributor = target_root.xpath(f"./distInfo/distributor")
+## if len(distributor) == 0:
+## target_root.xpath(f"./distInfo")[0].insert(distInfo_dict["distributor"], _root)
+## elif len(distributor) == 1:
+## distributor[0].getparent().replace(distributor[0], _root)
+## else:
+## pass
+## del _root, _tree, xml_file
+## #print(f"\n\t{etree.tostring(target_root.xpath(f'./distInfo/distributor')[0], encoding='UTF-8', method='xml', pretty_print=True).decode()}\n")
+## del distributor
+
+
+## mdFileID = target_root.xpath(f"//mdFileID")
+## if mdFileID is not None and len(mdFileID) == 0:
+## _xml = 'gov.noaa.nmfs.inport:'
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## target_root.insert(root_dict['mdFileID'], _root)
+## del _root, _xml
+## elif mdFileID is not None and len(mdFileID) and len(mdFileID[0]) == 0:
+## mdFileID[0].text = "gov.noaa.nmfs.inport:"
+## elif mdFileID is not None and len(mdFileID) and len(mdFileID[0]) == 1:
+## pass
+## #print(etree.tostring(mdFileID[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del mdFileID
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## mdMaint = target_root.xpath(f"//mdMaint")
+## if mdMaint is not None and len(mdMaint) == 0:
+## _xml = ''
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## target_root.insert(root_dict['mdMaint'], _root)
+## del _root, _xml
+## elif mdMaint is not None and len(mdMaint) and len(mdMaint[0]) == 0:
+## target_root.xpath("./mdMaint/maintFreq/MaintFreqCd")[0].attrib["value"] = "009"
+## elif mdMaint is not None and len(mdMaint) and len(mdMaint[0]) == 1:
+## pass #print(etree.tostring(mdMaint[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## else:
+## pass
+## #print(etree.tostring(mdMaint[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del mdMaint
+##
+## # No changes needed below
+## #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## etree.indent(target_root, space=' ')
+## dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+##
+## SaveBackXml = False
+## if SaveBackXml:
+## dataset_md = md.Metadata(dataset_path)
+## dataset_md.xml = dataset_md_xml
+## dataset_md.save()
+## dataset_md.synchronize("ALWAYS")
+## dataset_md.save()
+## #dataset_md.reload()
+## del dataset_md
+## else:
+## pass
+## del SaveBackXml
+## del dataset_md_xml
+
+ # Declared Variables
+ del root_dict
+ #del target_tree, target_root
+ #del metadata_dictionary,
+ del dataset_name
+ del project_gdb, project_folder, scratch_folder
+ # Imports
+ del md
+ del etree, StringIO, BytesIO
+ # Function Parameters
+ del dataset_path
+ except KeyboardInterrupt:
+ raise SystemExit
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ raise SystemExit
+ except arcpy.ExecuteError:
+ #traceback.print_exc()
+ arcpy.AddError(arcpy.GetMessages(2))
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ raise SystemExit
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
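The `sorted(_root, key=lambda x: root_dict[x.tag])` step above reorders the top-level metadata elements into the canonical order stored in `root_dict.json`. The same in-place reordering idea can be shown with stdlib ElementTree (the tag-order dict here is a made-up subset, not the real `root_dict.json` contents):

```python
import xml.etree.ElementTree as ET

# hypothetical tag order; the project loads the real mapping from root_dict.json
root_dict = {"Esri": 0, "dataIdInfo": 1, "mdLang": 2}

root = ET.fromstring("<metadata><mdLang/><dataIdInfo/><Esri/></metadata>")
# slice-assignment replaces the children in place, sorted by their rank
root[:] = sorted(root, key=lambda e: root_dict[e.tag])
tags = [e.tag for e in root]
```

Note that a tag missing from the order dict raises `KeyError`; a defensive variant would use `root_dict.get(e.tag, len(root_dict))` to push unknown elements to the end.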
+def update_eainfo_xml_elements(dataset_path=""):
+ try:
+ # Imports
+ from lxml import etree
+ from io import StringIO, BytesIO
+ #import copy
+ from arcpy import metadata as md
+ # Project modules
+ #from src.project_tools import pretty_format_xml_file
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ project_gdb = os.path.dirname(dataset_path)
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ import json
+ json_path = rf"{project_folder}\root_dict.json"
+ with open(json_path, "r") as json_file:
+ root_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del json
+
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+ dataset_name = os.path.basename(dataset_path)
+ print(f"Processing Entity Attributes for dataset: '{dataset_name}'")
+
+ # Root
+ mdTimeSt = target_root.find("./mdTimeSt")
+ #print(mdTimeSt)
+ if mdTimeSt is not None:
+ mdTimeSt.getparent().remove(mdTimeSt)
+ else:
+ pass
+ del mdTimeSt
+
+ enttyp = target_root.find(".//enttyp") # enttyp is nested below the root (under eainfo/detailed), so search the whole tree
+ if enttyp is not None:
+ enttypd = enttyp.find("enttypd")
+ enttypds = enttyp.find("enttypds")
+ if enttypd is None:
+ _xml = "<enttypd>A collection of geographic features with the same geometry type.</enttypd>" # tag reconstructed from the find() target above
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ enttyp.insert(0, _root)
+ del _root, _xml
+ else:
+ pass
+ if enttypds is None:
+ _xml = "<enttypds>Esri</enttypds>" # tag reconstructed from the find() target above
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ enttyp.insert(0, _root)
+ del _root, _xml
+ else:
+ pass
+ del enttypds, enttypd
+ else:
+ pass
+ del enttyp
+
+ # Create a list of fields using the ListFields function
+ fields = [f for f in arcpy.ListFields(dataset_path) if f.type not in ["Geometry", "OID"] and f.name not in ["Shape_Area", "Shape_Length"]]
+ for field in fields:
+ attributes = target_root.xpath(f".//attrlabl[text()='{field.name}']/..")
+ if attributes is not None and len(attributes) > 0:
+ for attribute in attributes:
+ #print(attribute)
+ #print(etree.tostring(attribute, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ attrdef = attribute.find("attrdef")
+ if attrdef is None:
+ _xml = f"<attrdef>Definition for: {field.name}</attrdef>" # tag reconstructed from the find() target above
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ attribute.insert(7, _root)
+ del _root, _xml
+ else:
+ pass
+ attrdefs = attribute.find("attrdefs")
+ if attrdefs is None:
+ _xml = "<attrdefs>NMFS OST DisMAP 2025</attrdefs>" # tag reconstructed from the find() target above
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ attribute.insert(8, _root)
+ del _root, _xml
+ else:
+ pass
+ attrdomv = attribute.find("attrdomv")
+ if attrdomv is None:
+ _xml = "<attrdomv><udom>None</udom></attrdomv>" # udom wrapper assumed for a free-text domain value
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ attribute.insert(8, _root)
+ del _root, _xml
+ else:
+ pass
+ del attrdef, attrdefs, attrdomv
+ del attribute
+ else:
+ pass
+ del attributes
+ del field
+
+ attributes = target_root.xpath(f".//attr")
+ for attribute in attributes:
+ #print(etree.tostring(attribute, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ del attribute
+ del attributes
+ del fields
+
+ # Metadata
+ target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag])
+
+ # No changes needed below
+ #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ etree.indent(target_tree, space=' ')
+ dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+
+ SaveBackXml = True
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("CREATED")
+ dataset_md.save()
+ #_target_tree = etree.parse(StringIO(dataset_md.xml), parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ #_target_tree.write(rf"{export_folder}\{dataset_name}.xml", pretty_print=True)
+ #print(etree.tostring(_target_tree.find("./eainfo"), encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ #del _target_tree
+ del dataset_md
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ # Declared Variables
+ del dataset_name
+ del target_tree, target_root
+ del project_gdb, project_folder, scratch_folder, root_dict
+ # Imports
+ del md, etree, StringIO, BytesIO
+ # Function Parameters
+ del dataset_path
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
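`update_eainfo_xml_elements` fills in FGDC entity/attribute children only when they are absent. A stdlib ElementTree sketch of that insert-if-missing pattern (element text and the sample feature-class name are illustrative):

```python
import xml.etree.ElementTree as ET

enttyp = ET.fromstring("<enttyp><enttypl>Sample_FC</enttypl></enttyp>")

# add an entity-type definition only if one is not already present
if enttyp.find("enttypd") is None:
    enttypd = ET.Element("enttypd")
    enttypd.text = "A collection of geographic features with the same geometry type."
    enttyp.insert(1, enttypd)  # place it after <enttypl>

has_def = enttyp.find("enttypd") is not None
```

Running the block twice is safe: the `find(...) is None` guard makes the insert idempotent, which is why the project can re-run metadata updates without duplicating elements.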
+def insert_missing_elements(dataset_path):
+ try:
+ from lxml import etree
+ from arcpy import metadata as md
+ from io import BytesIO, StringIO
+ import copy
+
+ project_gdb = os.path.dirname(dataset_path)
+ project_folder = os.path.dirname(project_gdb)
+ project = os.path.basename(os.path.dirname(project_gdb))
+ export_folder = rf"{project_folder}\Export"
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ import json
+ json_path = rf"{project_folder}\root_dict.json"
+ with open(json_path, "r") as json_file:
+ root_dict = json.load(json_file)
+ json_path = rf"{project_folder}\esri_dict.json"
+ with open(json_path, "r") as json_file:
+ esri_dict = json.load(json_file)
+ json_path = rf"{project_folder}\dataIdInfo_dict.json"
+ with open(json_path, "r") as json_file:
+ dataIdInfo_dict = json.load(json_file)
+ json_path = rf"{project_folder}\contact_dict.json"
+ with open(json_path, "r") as json_file:
+ contact_dict = json.load(json_file)
+ json_path = rf"{project_folder}\dqInfo_dict.json"
+ with open(json_path, "r") as json_file:
+ dqInfo_dict = json.load(json_file)
+ json_path = rf"{project_folder}\distInfo_dict.json"
+ with open(json_path, "r") as json_file:
+ distInfo_dict = json.load(json_file)
+ #json_path = rf"{project_folder}\RoleCd_dict.json"
+ #with open(json_path, "r") as json_file:
+ # RoleCd_dict = json.load(json_file)
+ #json_path = rf"{project_folder}\tpCat_dict.json"
+ #with open(json_path, "r") as json_file:
+ # tpCat_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del json
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+ del scratch_folder
+ del project_folder
+ del project_gdb
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # Get contact information
+ contacts_xml = rf"{os.environ['USERPROFILE']}\Documents\ArcGIS\Descriptions\contacts.xml"
+ contacts_xml_tree = etree.parse(contacts_xml, parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True)) # To parse from a string, use the fromstring() function instead.
+ del contacts_xml
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # Get Prefered contact information
+ mdContact_rpIndName = contact_dict["mdContact"][0]["rpIndName"]
+ mdContact_eMailAdd = contact_dict["mdContact"][0]["eMailAdd"]
+ mdContact_role = contact_dict["mdContact"][0]["role"]
+
+ citRespParty_rpIndName = contact_dict["citRespParty"][0]["rpIndName"]
+ citRespParty_eMailAdd = contact_dict["citRespParty"][0]["eMailAdd"]
+ citRespParty_role = contact_dict["citRespParty"][0]["role"]
+
+ idPoC_rpIndName = contact_dict["idPoC"][0]["rpIndName"]
+ idPoC_eMailAdd = contact_dict["idPoC"][0]["eMailAdd"]
+ idPoC_role = contact_dict["idPoC"][0]["role"]
+
+ distorCont_rpIndName = contact_dict["distorCont"][0]["rpIndName"]
+ distorCont_eMailAdd = contact_dict["distorCont"][0]["eMailAdd"]
+ distorCont_role = contact_dict["distorCont"][0]["role"]
+
+ srcCitatn_rpIndName = contact_dict["srcCitatn"][0]["rpIndName"]
+ srcCitatn_eMailAdd = contact_dict["srcCitatn"][0]["eMailAdd"]
+ srcCitatn_role = contact_dict["srcCitatn"][0]["role"]
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ #print(etree.tostring(target_root, encoding='UTF-8', method='xml', pretty_print=True).decode())
+
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+ CreaDate = target_root.xpath(f"//Esri/CreaDate")[0].text
+ CreaTime = target_root.xpath(f"//Esri/CreaTime")[0].text
+ #print(CreaDate, CreaTime)
+ CreaDateTime = f"{CreaDate[:4]}-{CreaDate[4:6]}-{CreaDate[6:]}T{CreaTime[:2]}:{CreaTime[2:4]}:{CreaTime[4:6]}"
+ #print(f"\tCreaDateTime: {CreaDateTime}")
+ #del CreaDateTime
+ del CreaDate, CreaTime
+ ModDate = target_root.xpath(f"//Esri/ModDate")[0].text
+ ModTime = target_root.xpath(f"//Esri/ModTime")[0].text
+ #print(ModDate, ModTime)
+ ModDateTime = f"{ModDate[:4]}-{ModDate[4:6]}-{ModDate[6:]}T{ModTime[:2]}:{ModTime[2:4]}:{ModTime[4:6]}"
+ #print(f"\tModDateTime: {ModDateTime}")
+ #del ModDateTime
+ del ModDate, ModTime
+
+ dataset_name = os.path.basename(dataset_path)
+ print(f"Processing/Updating elements for dataset: '{dataset_name}'")
+
+ xml_file = b'''
+
+
+ ISO 19139 Metadata Implementation Specification GML3.2
+ ISO19139
+
+
+
+
+
+
+ feature class name
+ feature class name
+ NMFS OST DisMAP
+
+
+
+
+
+
+
+
+
+
+ external
+ 810ad1c47a347c4bf5c88f2ea5077cd5d7a1bdcb
+ Timothy J Haverland
+ NMFS Office of Science and Technology
+ GIS App Developer
+
+
+ 1315 East West Highway
+ Silver Spring
+ MD
+ 20910-3282
+ tim.haverland@noaa.gov
+
+
+ 301-427-8137
+ 301-713-4137
+
+ 0700 - 1800 EST/EDT
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+ Timothy J Haverland
+ True
+
+
+
+
+
+
+
+
+ Global Change Master Directory (GCMD) Science Keywords
+
+
+
+
+
+
+ https://www.fisheries.noaa.gov/inport/help/components/keywords
+ REST Service
+ GCMD
+ Global Change Master Directory (GCMD) Science Keywords
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Global Change Master Directory (GCMD) Location Keywords
+
+
+
+
+
+
+ https://www.fisheries.noaa.gov/inport/help/components/keywords
+ REST Service
+ GCMD
+ Global Change Master Directory (GCMD) Location Keywords
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Global Change Master Directory (GCMD) Temporal Data Resolution Keywords
+
+
+
+
+
+
+ https://www.fisheries.noaa.gov/inport/help/components/keywords
+ REST Service
+ GCMD
+ Global Change Master Directory (GCMD) Temporal Data Resolution Keywords
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Integrated Taxonomic Information System (ITIS)
+
+
+
+
+
+
+ https://www.itis.gov
+ REST Service
+ ITIS
+ Integrated Taxonomic Information System (ITIS)
+
+
+
+
+
+
+
+
+
+
+ external
+ c66ffbb333c48d18d81856ec0e0c37ea752bff1a
+ Melissa Ann Karp
+ NMFS Office of Science and Technology
+ Fisheries Science Coordinator
+
+
+ 1315 East West Hwy
+ Silver Spring
+ MD
+ 20910-3282
+ melissa.karp@noaa.gov
+ US
+
+
+ 301-427-8202
+ 301-713-4137
+
+ 0700 - 1800 EST/EDT
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+ Melissa Ann Karp
+ True
+
+
+
+
+
+
+
+
+
+
+
+
+ Data License: CC0-1.0
+ Data License URL: https://creativecommons.org/publicdomain/zero/1.0/
+ Data License Statement: These data were produced by NOAA and are not subject to copyright protection in the United States. NOAA waives any potential copyright and related rights in these data worldwide through the Creative Commons Zero 1.0 Universal Public Domain Dedication (CC0-1.0).
+
+
+
+
+
+ FISMA Low
+
+
+ <DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>***No Warranty*** The user assumes the entire risk related to its use of these data. NMFS is providing these data 'as is' and NMFS disclaims any and all warranties, whether express or implied, including (without limitation) any implied warranties of merchantability or fitness for a particular purpose. No warranty expressed or implied is made regarding the accuracy or utility of the data on any other system or for general or scientific purposes, nor shall the act of distribution constitute any such warranty. It is strongly recommended that careful attention be paid to the contents of the metadata file associated with these data to evaluate dataset limitations, restrictions or intended use. In no event will NMFS be liable to you or to any third party for any direct, indirect, incidental, consequential, special or exemplary damages or lost profit resulting from any use or misuse of these data.</SPAN></P></DIV></DIV></DIV>
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ dataset
+
+
+
+ Based on a review by the DisMAP Team, all necessary features are present.
+
+
+
+ Conceptual Consistency Report
+
+ NMFS OST DisMAP
+
+
+
+
+
+
+ Based on a review by the DisMAP Team, all necessary features are present.
+ 1
+
+
+
+
+ Based on a review by the DisMAP Team, all necessary features are present.
+
+
+
+ Completeness Report
+
+ NMFS OST DisMAP
+
+
+
+
+
+
+ Based on a review by the DisMAP Team, all necessary features are present.
+ 1
+
+
+
+
+
+
+ Data for 'region' 'version'
+
+
+
+
+ Source Citation for:
+
+ NMFS OST DisMAP
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+
+
+
+
+
+ document
+
+
+
+ external
+ c66ffbb333c48d18d81856ec0e0c37ea752bff1a
+ Melissa Ann Karp
+ NMFS Office of Science and Technology
+ Fisheries Science Coordinator
+
+
+ 1315 East West Hwy
+ Silver Spring
+ MD
+ 20910-3282
+ melissa.karp@noaa.gov
+ US
+
+
+ 301-427-8202
+ 301-713-4137
+
+ 0700 - 1800 EST/EDT
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+ Melissa Ann Karp
+ True
+
+
+
+
+
+
+
+ Geoprocessing Steps for 'region' 'version date'
+
+
+ external
+ c66ffbb333c48d18d81856ec0e0c37ea752bff1a
+ Melissa Ann Karp
+ NMFS Office of Science and Technology
+ Fisheries Science Coordinator
+
+
+ 1315 East West Hwy
+ Silver Spring
+ MD
+ 20910-3282
+ melissa.karp@noaa.gov
+ US
+
+
+ 301-427-8202
+ 301-713-4137
+
+ 0700 - 1800 EST/EDT
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+ Melissa Ann Karp
+ True
+
+
+
+
+
+ external
+ b212accd6134b5457de3ed1debca061419d927ce
+ John F Kennedy
+ NMFS Office of Science and Technology
+ GIS Specialist
+
+
+ 1315 East West Highway
+ Silver Spring
+ MD
+ 20910-3282
+ US
+ john.f.kennedy@noaa.gov
+
+
+ 301-427-8149
+ 301-713-4137
+
+ 0930 - 2030 EST/EDT
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+ John F Kennedy
+ True
+
+
+
+
+
+ DisMAP
+
+
+
+
+
+
+
+ ESRI REST Service
+
+ Uncompressed
+
+
+
+
+ external
+ 579ce2e21b888ac8f6ac1dac30f04cddec7a0d7c
+ NMFS Office of Science and Technology
+ NMFS Office of Science and Technology
+ GIS App Developer
+
+
+ 1315 East West Highway
+ Silver Spring
+ MD
+ 20910-3282
+ US
+ tim.haverland@noaa.gov
+
+
+ 301-427-8137
+ 301-713-4137
+
+ 0700 - 1800 EST/EDT
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+ NMFS Office of Science and Technology (Distributor)
+ True
+
+
+
+
+
+
+ MB
+ 8
+
+ https://services2.arcgis.com/C8EMgrsFcRFL6LrL/arcgis/rest/services/.../FeatureServer
+ ESRI REST Service
+ NMFS Office of Science and Technology
+ Dataset Feature Service
+
+
+
+
+
+
+
+ external
+ b212accd6134b5457de3ed1debca061419d927ce
+ John F Kennedy
+ NMFS Office of Science and Technology
+ GIS Specialist
+
+
+ 1315 East West Highway
+ Silver Spring
+ MD
+ 20910-3282
+ US
+ john.f.kennedy@noaa.gov
+
+
+ 301-427-8149
+ 301-713-4137
+
+ 0930 - 2030 EST/EDT
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+ John F Kennedy (Metadata Author)
+ True
+
+
+
+
+ <.>
+ '''
+
+ source_tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ source_root = source_tree.getroot()
+
+ # Merge Target with Source
+ target_source_merge = xml_tree_merge(target_root, source_root)
+ #print(etree.tostring(target_source_merge, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ # Merge Source with Target
+ source_target_merge = xml_tree_merge(target_source_merge, target_root)
+ #print(etree.tostring(source_target_merge, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ del target_source_merge
+ del source_tree, source_root, xml_file
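`xml_tree_merge` is defined elsewhere in this script, so its exact semantics are not shown here. Purely as an illustration of what a tag-based element merge can look like (this is a hypothetical sketch, not the project's actual implementation), a minimal recursive version copies any branch the target lacks and recurses where both trees have a matching tag:

```python
from copy import deepcopy
from lxml import etree

def merge_elements(target, source):
    """Merge 'source' into 'target' by tag: adopt branches missing from the
    target, recurse into branches both trees share, and keep the target's
    text where it already has a value."""
    target_tags = {child.tag: child for child in target}
    for child in source:
        if child.tag not in target_tags:
            # deepcopy so the source tree is not mutated while iterating
            target.append(deepcopy(child))
        else:
            merge_elements(target_tags[child.tag], child)
    return target
```

A merge keyed only on tag name collapses repeated siblings (e.g. multiple `dataSource` elements), which is one reason a project-specific merge function is used above.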
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ #
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ dataset_md_xml = etree.tostring(source_target_merge, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=False)
+
+ SaveBackXml = False
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("CREATED")
+ dataset_md.save()
+ #dataset_md.reload()
+ #dataset_md_xml = dataset_md.xml
+ del dataset_md
+ # Parse the XML
+ #_target_tree = etree.parse(StringIO(dataset_md_xml), parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ #del dataset_md_xml
+ #print(etree.tostring(_target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ #del _target_tree
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ #
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
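Much of the element editing below indexes `xpath()` results with `[0]`, which raises `IndexError` whenever a dataset's metadata happens to lack that element. As a sketch (the name `xpath_first` is introduced here, not from the script), a small guard helper makes the missing-element case explicit:

```python
from lxml import etree

def xpath_first(root, path):
    """Return the first element matching 'path', or None if nothing matches,
    instead of letting results[0] raise IndexError."""
    matches = root.xpath(path)
    return matches[0] if matches else None
```

Call sites can then branch on `None` rather than wrapping each lookup in try/except, e.g. `elem = xpath_first(target_root, "./dataIdInfo/envirDesc")`.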
+
+## target_root.xpath("./distInfo/distFormat/formatName")[0].set('Sync', "FALSE")
+## target_root.xpath("./dataIdInfo/envirDesc")[0].set('Sync', "TRUE")
+
+## for key in root_dict:
+## #print(key)
+## elem = target_root.find(f"./{key}")
+## if elem is not None and len(elem) > 0:
+## #print(elem.tag)
+## pass
+## del elem
+## del key
+##
+## # Root
+## mdTimeSt = target_root.find("./mdTimeSt")
+## #print(mdTimeSt)
+## if mdTimeSt is not None:
+## mdTimeSt.getparent().remove(mdTimeSt)
+## else:
+## pass
+## del mdTimeSt
+
+## # Metadata
+## target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag])
+## # Esri
+## Esri = target_root.xpath("./Esri")[0]
+## Esri[:] = sorted(Esri, key=lambda x: esri_dict[x.tag])
+## #print(etree.tostring(Esri, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del Esri
+## # dataIdInfo
+## dataIdInfo = target_root.xpath("./dataIdInfo")[0]
+## dataIdInfo[:] = sorted(dataIdInfo, key=lambda x: dataIdInfo_dict[x.tag])
+## #print(etree.tostring(dataIdInfo, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del dataIdInfo
+## # dqInfo
+## dqInfo = target_root.xpath("./dqInfo")[0]
+## dqInfo[:] = sorted(dqInfo, key=lambda x: dqInfo_dict[x.tag])
+## #print(etree.tostring(dqInfo, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del dqInfo
+## # distInfo
+## distInfo = target_root.xpath("./distInfo")[0]
+## distInfo[:] = sorted(distInfo, key=lambda x: distInfo_dict[x.tag])
+## #print(etree.tostring(distInfo, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del distInfo
+ # mdContact
+ # mdLang
+ # mdHrLv
+ # refSysInfo
+ # spatRepInfo
+ # spdoinfo
+ # eainfo
+
+## enttyp = target_root.find("enttyp")
+## if enttyp is not None:
+## enttypd = enttyp.find("enttypd")
+## enttypds = enttyp.find("enttypds")
+## if enttypd is None:
+## _xml = "A collection of geographic features with the same geometry type."
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## enttyp.insert(0, _root)
+## del _root, _xml
+## else:
+## pass
+## if enttypds is None:
+## _xml = "Esri"
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## enttyp.insert(0, _root)
+## del _root, _xml
+## else:
+## pass
+## del enttypds, enttypd
+## else:
+## pass
+## del enttyp
+
+ # idCredit completed
+ #for idCredit in target_root.xpath("./dataIdInfo/idCredit"):
+ # #print(etree.tostring(idCredit, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ # #idCredit.text = "NOAA Fisheries. 2025.."
+ # del idCredit
+ #print(etree.tostring(target_root.xpath("./dataIdInfo/idCredit")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+
+## # resTitle completed
+## resTitle = target_root.xpath("./dataIdInfo/idCitation/resTitle")[0]
+## target_root.xpath("./dqInfo/dataLineage/dataSource/srcCitatn/resTitle")[0].text = resTitle.text
+## #print(f"\tresTitle: {resTitle.text}")
+## #resTitle.text = f""
+## del resTitle
+## resAltTitle = target_root.xpath("./dataIdInfo/idCitation/resAltTitle")[0]
+## target_root.xpath("./dqInfo/dataLineage/dataSource/srcCitatn/resAltTitle")[0].text = resAltTitle.text
+## #print(f"\tresAltTitle: {resAltTitle.text}")
+## #resAltTitle.text = f""
+## del resAltTitle
+## collTitle = target_root.xpath("./dataIdInfo/idCitation/collTitle")[0]
+## #print(f"\tcollTitle: {collTitle.text}")
+## collTitle.text = "NMFS OST DisMAP"
+## target_root.xpath("./dqInfo/dataLineage/dataSource/srcCitatn/collTitle")[0].text = collTitle.text
+## #print(f"\tcollTitle: {collTitle.text}")
+## del collTitle
+
+## resConst = target_root.xpath("./dataIdInfo/resConst")
+## if len(resConst) == 1:
+## xml_file = b'''
+##
+##
+##
+##
+##
+##
+##
+## Data License: CC0-1.0
+##Data License URL: https://creativecommons.org/publicdomain/zero/1.0/
+##Data License Statement: These data were produced by NOAA and are not subject to copyright protection in the United States. NOAA waives any potential copyright and related rights in these data worldwide through the Creative Commons Zero 1.0 Universal Public Domain Dedication (CC0-1.0).
+##
+##
+##
+##
+##
+##
+## FISMA Low
+##
+##
+## <DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>***No Warranty*** The user assumes the entire risk related to its use of these data. NMFS is providing these data 'as is' and NMFS disclaims any and all warranties, whether express or implied, including (without limitation) any implied warranties of merchantability or fitness for a particular purpose. No warranty expressed or implied is made regarding the accuracy or utility of the data on any other system or for general or scientific purposes, nor shall the act of distribution constitute any such warranty. It is strongly recommended that careful attention be paid to the contents of the metadata file associated with these data to evaluate dataset limitations, restrictions or intended use. In no event will NMFS be liable to you or to any third party for any direct, indirect, incidental, consequential, special or exemplary damages or lost profit resulting from any use or misuse of these data.</SPAN></P></DIV></DIV></DIV>
+##
+## '''
+## _tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## _root = _tree.getroot()
+## resConst[0].getparent().replace(resConst[0], _root)
+## del _root, _tree, xml_file
+## else:
+## pass
+## del resConst
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # discKeys, themeKeys, placeKeys, tempKeys
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ #searchKeys = target_root.xpath("./dataIdInfo/searchKeys")
+ #for searchKey in searchKeys:
+ # #print(etree.tostring(searchKey, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ # del searchKey
+ #del searchKeys
+## searchKeys = target_root.xpath("./dataIdInfo/searchKeys")
+## for searchKey in searchKeys:
+## #print(etree.tostring(searchKey, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## for keyword in searchKey.xpath("./keyword"):
+## if isinstance(keyword.text, type(None)):
+## keyword.getparent().remove(keyword)
+## else:
+## pass #print(etree.tostring(keyword, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del searchKey
+## del searchKeys
+## target_root.xpath("./dataIdInfo/searchKeys/keyword")[0].text = f"{species_range_dict[dataset_name]['LISTENTITY']}; ESA; range; NMFS"
+##
+## keywords = target_root.xpath("./dataIdInfo/discKeys/keyword")
+## if keywords is not None and len(keywords) and len(keywords[0]) == 0:
+## keyword = target_root.xpath("./dataIdInfo/discKeys/keyword")[0]
+## keyword.text = f"{species_range_dict[dataset_name]['SCIENAME']}"
+## del keyword
+## #elif keywords is not None and len(keywords) and len(keywords[0]) >= 1:
+## # pass
+## del keywords
+## createDate = target_root.xpath("./dataIdInfo/discKeys/thesaName/date/createDate")
+## if createDate is not None and len(createDate) and len(createDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/discKeys/thesaName/date/createDate")[0].text = CreaDateTime
+## elif createDate is not None and len(createDate) and len(createDate[0]) == 1:
+## pass
+## else:
+## pass
+## del createDate
+## pubDate = target_root.xpath("./dataIdInfo/discKeys/thesaName/date/pubDate")
+## if pubDate is not None and len(pubDate) and len(pubDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/discKeys/thesaName/date/pubDate")[0].text = CreaDateTime
+## elif pubDate is not None and len(pubDate) and len(pubDate[0]) == 1:
+## pass
+## else:
+## pass
+## del pubDate
+## reviseDate = target_root.xpath("./dataIdInfo/discKeys/thesaName/date/reviseDate")
+## if reviseDate is not None and len(reviseDate) and len(reviseDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/discKeys/thesaName/date/reviseDate")[0].text = ModDateTime
+## elif reviseDate is not None and len(reviseDate) and len(reviseDate[0]) == 1:
+## pass
+## else:
+## pass
+## del reviseDate
+## resTitle = target_root.xpath("./dataIdInfo/discKeys/thesaName/resTitle")
+## if resTitle is not None and len(resTitle) and len(resTitle[0]) == 0:
+## target_root.xpath("./dataIdInfo/discKeys/thesaName/resTitle")[0].text = "Integrated Taxonomic Information System (ITIS)"
+## elif resTitle is not None and len(resTitle) and len(resTitle[0]) == 1:
+## pass
+## else:
+## pass
+## del resTitle
+## discKeys = target_root.xpath("./dataIdInfo/discKeys")
+## for i in range(0, len(discKeys)):
+## #print(etree.tostring(discKeys[i], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del i
+## del discKeys
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # themeKeys
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## keywords = target_root.xpath("./dataIdInfo/themeKeys/keyword")
+## if keywords is not None and len(keywords) and len(keywords[0]) == 0:
+## new_item_name = target_root.find("./Esri/DataProperties/itemProps/itemName").text
+## keyword = target_root.xpath("./dataIdInfo/themeKeys/keyword")[0]
+## keyword.text = f"{species_range_dict[dataset_name]['COMNAME'].title()}; {species_range_dict[dataset_name]['SCIENAME']}; Endangered Species; NMFS"
+## del keyword, new_item_name
+## elif keywords is not None and len(keywords) and len(keywords[0]) >= 1:
+## pass
+## del keywords
+## createDate = target_root.xpath("./dataIdInfo/themeKeys/thesaName/date/createDate")
+## if createDate is not None and len(createDate) and len(createDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/themeKeys/thesaName/date/createDate")[0].text = CreaDateTime
+## elif createDate is not None and len(createDate) and len(createDate[0]) == 1:
+## pass
+## else:
+## pass
+## del createDate
+## pubDate = target_root.xpath("./dataIdInfo/themeKeys/thesaName/date/pubDate")
+## if pubDate is not None and len(pubDate) and len(pubDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/themeKeys/thesaName/date/pubDate")[0].text = CreaDateTime
+## elif pubDate is not None and len(pubDate) and len(pubDate[0]) == 1:
+## pass
+## else:
+## pass
+## del pubDate
+## reviseDate = target_root.xpath("./dataIdInfo/themeKeys/thesaName/date/reviseDate")
+## if reviseDate is not None and len(reviseDate) and len(reviseDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/themeKeys/thesaName/date/reviseDate")[0].text = ModDateTime
+## elif reviseDate is not None and len(reviseDate) and len(reviseDate[0]) == 1:
+## pass
+## else:
+## pass
+## del reviseDate
+## resTitle = target_root.xpath("./dataIdInfo/themeKeys/thesaName/resTitle")
+## if resTitle is not None and len(resTitle) and len(resTitle[0]) == 0:
+## target_root.xpath("./dataIdInfo/themeKeys/thesaName/resTitle")[0].text = "Global Change Master Directory (GCMD) Science Keyword"
+## elif resTitle is not None and len(resTitle) and len(resTitle[0]) == 1:
+## pass
+## else:
+## pass
+## del resTitle
+## themeKeys = target_root.xpath("./dataIdInfo/themeKeys")
+## for i in range(0, len(themeKeys)):
+## #print(etree.tostring(themeKeys[i], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del i
+## del themeKeys
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # placeKeys
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## keywords = target_root.xpath("./dataIdInfo/placeKeys/keyword")
+## if keywords is not None and len(keywords) and len(keywords[0]) == 0:
+## new_item_name = target_root.find("./Esri/DataProperties/itemProps/itemName").text
+## keyword = target_root.xpath("./dataIdInfo/placeKeys/keyword")[0]
+## keyword.text = f"Enter place/geography keywords for {new_item_name}, separated by a semicolon"
+## del keyword, new_item_name
+## elif keywords is not None and len(keywords) and len(keywords[0]) >= 1:
+## pass
+## del keywords
+## createDate = target_root.xpath("./dataIdInfo/placeKeys/thesaName/date/createDate")
+## if createDate is not None and len(createDate) and len(createDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/placeKeys/thesaName/date/createDate")[0].text = CreaDateTime
+## elif createDate is not None and len(createDate) and len(createDate[0]) == 1:
+## pass
+## else:
+## pass
+## del createDate
+## pubDate = target_root.xpath("./dataIdInfo/placeKeys/thesaName/date/pubDate")
+## if pubDate is not None and len(pubDate) and len(pubDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/placeKeys/thesaName/date/pubDate")[0].text = CreaDateTime
+## elif pubDate is not None and len(pubDate) and len(pubDate[0]) == 1:
+## pass
+## else:
+## pass
+## del pubDate
+## reviseDate = target_root.xpath("./dataIdInfo/placeKeys/thesaName/date/reviseDate")
+## if reviseDate is not None and len(reviseDate) and len(reviseDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/placeKeys/thesaName/date/reviseDate")[0].text = ModDateTime
+## elif reviseDate is not None and len(reviseDate) and len(reviseDate[0]) == 1:
+## pass
+## else:
+## pass
+## del reviseDate
+## resTitle = target_root.xpath("./dataIdInfo/placeKeys/thesaName/resTitle")
+## if resTitle is not None and len(resTitle) and len(resTitle[0]) == 0:
+## target_root.xpath("./dataIdInfo/placeKeys/thesaName/resTitle")[0].text = "Global Change Master Directory (GCMD) Location Keywords"
+## elif resTitle is not None and len(resTitle) and len(resTitle[0]) == 1:
+## pass
+## else:
+## pass
+## del resTitle
+## placeKeys = target_root.xpath("./dataIdInfo/placeKeys")
+## for i in range(0, len(placeKeys)):
+## #print(etree.tostring(placeKeys[i], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del i
+## del placeKeys
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # tempKeys
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## keywords = target_root.xpath("./dataIdInfo/tempKeys/keyword")
+## if keywords is not None and len(keywords) and len(keywords[0]) == 0:
+## new_item_name = target_root.find("./Esri/DataProperties/itemProps/itemName").text
+## keyword = target_root.xpath("./dataIdInfo/tempKeys/keyword")[0]
+## keyword.text = f"Enter temporal keywords (e.g. year, year range, season, etc.) for {new_item_name}, separated by a semicolon"
+## del keyword, new_item_name
+## elif keywords is not None and len(keywords) and len(keywords[0]) >= 1:
+## pass
+## del keywords
+## createDate = target_root.xpath("./dataIdInfo/tempKeys/thesaName/date/createDate")
+## if createDate is not None and len(createDate) and len(createDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/tempKeys/thesaName/date/createDate")[0].text = CreaDateTime
+## elif createDate is not None and len(createDate) and len(createDate[0]) == 1:
+## pass
+## else:
+## pass
+## del createDate
+## pubDate = target_root.xpath("./dataIdInfo/tempKeys/thesaName/date/pubDate")
+## if pubDate is not None and len(pubDate) and len(pubDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/tempKeys/thesaName/date/pubDate")[0].text = CreaDateTime
+## elif pubDate is not None and len(pubDate) and len(pubDate[0]) == 1:
+## pass
+## else:
+## pass
+## del pubDate
+## reviseDate = target_root.xpath("./dataIdInfo/tempKeys/thesaName/date/reviseDate")
+## if reviseDate is not None and len(reviseDate) and len(reviseDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/tempKeys/thesaName/date/reviseDate")[0].text = ModDateTime
+## elif reviseDate is not None and len(reviseDate) and len(reviseDate[0]) == 1:
+## pass
+## else:
+## pass
+## del reviseDate
+## resTitle = target_root.xpath("./dataIdInfo/tempKeys/thesaName/resTitle")
+## if resTitle is not None and len(resTitle) and len(resTitle[0]) == 0:
+## target_root.xpath("./dataIdInfo/tempKeys/thesaName/resTitle")[0].text = "Global Change Master Directory (GCMD) Temporal Data Resolution Keywords"
+## elif resTitle is not None and len(resTitle) and len(resTitle[0]) == 1:
+## pass
+## else:
+## pass
+## del resTitle
+## tempKeys = target_root.xpath("./dataIdInfo/tempKeys")
+## for i in range(0, len(tempKeys)):
+## #print(etree.tostring(tempKeys[i], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del i
+## del tempKeys
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # Data Extent
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## dataExt = target_root.xpath("./dataIdInfo/dataExt")[0]
+## exDesc = dataExt.xpath("//exDesc")
+## if len(exDesc) == 0:
+## _xml = "[Location extent description]. The data represents an approximate distribution of the listed entity based on the best available information from [date of first source] to [date of final species expert review]."
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## dataExt.insert(0, _root)
+## del _root, _xml
+## elif len(exDesc) == 1:
+## exDesc[0].text = "[Location extent description]. The data represents an approximate distribution of the listed entity based on the best available information from [date of first source] to [date of final species expert review]."
+## else:
+## pass
+## del exDesc
+## tempEle = dataExt.xpath("//tempEle")
+## if len(tempEle) == 0:
+## _xml = f' \
+## {CreaDateTime}{ModDateTime} \
+## {ModDateTime}'
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## dataExt.insert(2, _root)
+## del _root, _xml
+## elif len(tempEle) == 1:
+## _xml = f' \
+## {CreaDateTime}{ModDateTime} \
+## {ModDateTime}'
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## tempEle[0].getparent().replace(tempEle[0], _root)
+## del _root, _xml
+## del tempEle
+## del dataExt
+ #dataExt = target_root.xpath("./dataIdInfo/dataExt")
+ #for i in range(0, len(dataExt)):
+ # #print(etree.tostring(dataExt[i], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ # del i
+ #del dataExt
+ #dataExt = target_root.xpath("./dataIdInfo/dataExt")
+ #for i in range(1, len(dataExt)):
+ # dataExt[i].getparent().remove(dataExt[i])
+ # del i
+ #del dataExt
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## #target_root.xpath("./dqInfo/dqScope/scpLvl/ScopeCd")[0].set('value', "005")
+## PresFormCd = target_root.xpath("./dataIdInfo/idCitation/presForm/PresFormCd")[0]
+## fgdcGeoform = target_root.xpath("./dataIdInfo/idCitation/presForm/fgdcGeoform")[0]
+## SpatRepTypCd = target_root.xpath("./dataIdInfo/spatRpType/SpatRepTypCd")[0]
+## PresFormCd.set('Sync', "TRUE")
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # SpatRepTypCd "Empty" "001" (vector) "002" (raster/grid) "003" (tabular)
+## # PresFormCd "005" "003" "011"
+## # fgdcGeoform "vector data" "raster data" "tabular data"
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## datasetSet = target_root.xpath("./dqInfo/dqScope/scpLvlDesc/datasetSet")[0]
+## if SpatRepTypCd.get("value") == "001":
+## PresFormCd.set("value", "005")
+## fgdcGeoform.text = "vector digital data"
+## datasetSet.text = "Vector Digital Data"
+## elif SpatRepTypCd.get("value") == "002":
+## PresFormCd.set("value", "003")
+## fgdcGeoform.text = "raster digital data"
+## datasetSet.text = "Raster Digital Data"
+## elif SpatRepTypCd.get("value") == "003":
+## PresFormCd.set("value", "011")
+## fgdcGeoform.text = "tabular digital data"
+## datasetSet.text = "Tabular Digital Data"
+## else:
+## pass
+## #print("------" * 10)
+## #print(etree.tostring(SpatRepTypCd, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## #print(etree.tostring(target_root.xpath("./dataIdInfo/idCitation/presForm")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## #print(etree.tostring(target_root.xpath("./dqInfo/dqScope")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## #print("------" * 10)
+## del datasetSet, SpatRepTypCd, fgdcGeoform, PresFormCd
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## formatName = target_root.xpath("./distInfo/distFormat/formatName")[0]
+## envirDesc = target_root.xpath("./dataIdInfo/envirDesc")[0]
+## envirDesc.set('Sync', "TRUE")
+## target_root.xpath("./distInfo/distFormat/fileDecmTech")[0].text = "Uncompressed"
+## # # 001 = Vector
+## #''' # 002 = Grid
+## # # 003 = Text Table
+## #'''
+## #format_name_text = ""
+## #try:
+## # GeoObjTypCd = target_root.xpath("./spatRepInfo/VectSpatRep/geometObjs/geoObjTyp/GeoObjTypCd")[0].get("value")
+## # if GeoObjTypCd == "002":
+## # format_name_text = "ESRI File Geodatabase"
+## # del GeoObjTypCd
+## #except:
+## # format_name_text = "ESRI Geodatabase Table"
+## formatName.text = "ESRI REST Service"
+## formatVer_text = str.rstrip(str.lstrip(envirDesc.text))
+## formatVer = target_root.xpath("./distInfo/distFormat/formatVer")[0]
+## formatVer.text = str.rstrip(str.lstrip(formatVer_text))
+## del formatVer_text
+## del envirDesc
+## del formatVer
+## del formatName
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## mdFileID = target_root.xpath(f"//mdFileID")
+## if mdFileID is not None and len(mdFileID) == 0:
+## _xml = 'gov.noaa.nmfs.inport:'
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## target_root.insert(root_dict['mdFileID'], _root)
+## del _root, _xml
+## elif mdFileID is not None and len(mdFileID) and len(mdFileID[0]) == 0:
+## mdFileID[0].text = "gov.noaa.nmfs.inport:"
+## elif mdFileID is not None and len(mdFileID) and len(mdFileID[0]) == 1:
+## pass
+## #print(etree.tostring(mdFileID[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del mdFileID
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## mdMaint = target_root.xpath(f"//mdMaint")
+## if mdMaint is not None and len(mdMaint) == 0:
+## _xml = '<mdMaint><maintFreq><MaintFreqCd value="009"/></maintFreq></mdMaint>'
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## target_root.insert(root_dict['mdMaint'], _root)
+## del _root, _xml
+## elif mdMaint is not None and len(mdMaint) and len(mdMaint[0]) == 0:
+## target_root.xpath("./mdMaint/maintFreq/MaintFreqCd")[0].attrib["value"] = "009"
+## elif mdMaint is not None and len(mdMaint) and len(mdMaint[0]) == 1:
+## pass #print(etree.tostring(mdMaint[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## else:
+## pass
+## #print(etree.tostring(mdMaint[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del mdMaint
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## distorTran = target_root.xpath("//distorTran")
+## for _distorTran in distorTran:
+## _distorTran.tag = "distTranOps"
+## del _distorTran
+## del distorTran
+
+## distTranOps = target_root.xpath("//distTranOps")
+## for i in range(0, len(distTranOps)):
+## if i == 0:
+## xml_file = b'''
+## MB
+## 0
+##
+## https://services2.arcgis.com/C8EMgrsFcRFL6LrL/arcgis/rest/services/.../FeatureServer
+## ESRI REST Service
+## NMFS Office of Science and Technology
+## Dataset Feature Service
+##
+##
+##
+##
+## '''
+## _tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## _root = _tree.getroot()
+## distTranOps[i].getparent().replace(distTranOps[i], _root)
+## del _root, _tree, xml_file
+## elif i > 0:
+## distTranOps[i].getparent().remove(distTranOps[i])
+## else:
+## pass
+## del i
+## del distTranOps
+ #print(etree.tostring(target_root.xpath("./distInfo")[0], encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./Esri/DataProperties/itemProps")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## new_item_name = target_root.find("./Esri/DataProperties/itemProps/itemName").text
+## new_item_name = new_item_name.replace("IDW_Sample_Locations", "Sample_Locations") if "Sample_Locations" in new_item_name else new_item_name
+## onLineSrcs = target_root.findall("./distInfo/distTranOps/onLineSrc")
+## for onLineSrc in onLineSrcs:
+## if onLineSrc.find('./protocol').text == "ESRI REST Service":
+## old_linkage_element = onLineSrc.find('./linkage')
+## old_linkage = old_linkage_element.text
+## #print(old_linkage, flush=True)
+## old_item_name = old_linkage[old_linkage.find("/services/")+len("/services/"):old_linkage.find("/FeatureServer")]
+## new_linkage = old_linkage.replace(old_item_name, f"{new_item_name}_{date_code(project)}")
+## #print(new_linkage, flush=True)
+## old_linkage_element.text = new_linkage
+## #print(old_linkage_element.text, flush=True)
+## del old_linkage_element
+## del old_item_name, old_linkage, new_linkage
+## else:
+## pass
+## del onLineSrc
+## del onLineSrcs, new_item_name
+## #print(etree.tostring(target_root.xpath("./distInfo")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+
+## xml_file = b'''
+##
+## external
+## 579ce2e21b888ac8f6ac1dac30f04cddec7a0d7c
+## NMFS Office of Science and Technology
+## NMFS Office of Science and Technology
+## GIS App Developer
+##
+##
+## 1315 East West Highway
+## Silver Spring
+## MD
+## 20910-3282
+## US
+## tim.haverland@noaa.gov
+##
+##
+## 301-427-8137
+## 301-713-4137
+##
+## 0700 - 1800 EST/EDT
+##
+## https://www.fisheries.noaa.gov/about/office-science-and-technology
+## REST Service
+## NMFS Office of Science and Technology
+## NOAA Fisheries Office of Science and Technology
+##
+##
+##
+##
+##
+## NMFS Office of Science and Technology (Distributor)
+## True
+##
+##
+##
+##
+##
+## '''
+## _tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## _root = _tree.getroot()
+## distributor = target_root.xpath(f"./distInfo/distributor")[0]
+## distributor.getparent().replace(distributor, _root)
+## del _root, _tree, xml_file
+## #print(f"\n\t{etree.tostring(target_root.xpath(f'./distInfo/distributor')[0], encoding='UTF-8', method='xml', pretty_print=True).decode()}\n")
+## del distributor
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # statement
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## statement = target_root.xpath("./dqInfo/dataLineage/statement")
+## if statement is not None and len(statement) == 0:
+## pass # Need to insert statement
+## elif statement is not None and len(statement) and len(statement[0]) == 0:
+## target_root.xpath("./dqInfo/dataLineage/statement")[0].text = "Need to update data lineage statement"
+## elif statement is not None and len(statement) and len(statement[0]) == 1:
+## pass
+## elif statement is not None and len(statement) and len(statement[0]) >= 1:
+## pass
+## else:
+## pass
+## #print(f"\n\t{etree.tostring(statement[0], encoding='UTF-8', method='xml', pretty_print=True).decode()}\n")
+## del statement
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # srcDesc
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## srcDesc = target_root.xpath("./dqInfo/dataLineage/dataSource/srcDesc")
+## if srcDesc is not None and len(srcDesc) == 0:
+## pass # Need to insert srcDesc
+## elif srcDesc is not None and len(srcDesc) and len(srcDesc[0]) == 0:
+## target_root.xpath("./dqInfo/dataLineage/dataSource/srcDesc")[0].text = "Need to update srcDesc"
+## elif srcDesc is not None and len(srcDesc) and len(srcDesc[0]) == 1:
+## pass
+## elif srcDesc is not None and len(srcDesc) and len(srcDesc[0]) >= 1:
+## pass
+## else:
+## pass
+## #print(f"\n\t{etree.tostring(srcDesc[0], encoding='UTF-8', method='xml', pretty_print=True).decode()}\n")
+## del srcDesc
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # prcStep
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## stepProcs = target_root.xpath("./dqInfo/dataLineage/prcStep/stepProc")
+## for stepProc in stepProcs:
+## rpIndName = stepProc.find("rpIndName")
+## if rpIndName is None:
+## stepProc.getparent().remove(stepProc)
+## else:
+## pass
+## del rpIndName
+## #print(f"{etree.tostring(stepProc, encoding='UTF-8', method='xml', pretty_print=True).decode()}")
+## del stepProc
+## del stepProcs
+
+## _report = target_root.xpath(f"./dqInfo/report[@type='DQConcConsis']")
+## if len(_report) == 1:
+## _xml = '''
+## Based on a review from DisMAP Team all necessary features are present.
+##
+##
+##
+## Conceptual Consistency Report
+##
+## NMFS OST DisMAP
+##
+##
+##
+##
+##
+##
+## Based on a review from DisMAP Team all necessary features are present.
+## 1
+##
+##
+## '''
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## #print(f"{etree.tostring(_root, encoding='UTF-8', method='xml', pretty_print=True).decode()}")
+## #raise SystemExit
+## _report[0].getparent().replace(_report[0], _root)
+## del _root, _xml
+## else:
+## pass
+## del _report
+##
+## _report = target_root.xpath(f"./dqInfo/report[@type='DQCompOm']")
+## if len(_report) == 1:
+## _xml = '''
+## Based on a review from DisMAP Team all necessary features are present.
+##
+##
+##
+## Completeness Report
+##
+## NMFS OST DisMAP
+##
+##
+##
+##
+##
+##
+## Based on a review from DisMAP Team all necessary features are present.
+## 1
+##
+##
+## '''
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## _report[0].getparent().replace(_report[0], _root)
+## del _root, _xml
+## else:
+## pass
+## del _report
+
+
+## prcStep = target_root.xpath("./dqInfo/dataLineage/prcStep")
+## #print(len(prcStep))
+## if prcStep is not None and len(prcStep) == 0:
+## pass
+## #print("prcStep missing")
+## elif prcStep is not None and len(prcStep) and len(prcStep[0]) == 0:
+## #print("found empty element prcStep. Now adding content.")
+## target_root.xpath("./dqInfo/dataLineage/prcStep")[0].text = "Update Metadata 2025"
+## elif prcStep is not None and len(prcStep) and len(prcStep[0]) >= 1:
+## for i in range(0, len(prcStep)):
+## stepDesc = prcStep[i].xpath("./stepDesc")[0]
+## if stepDesc.text == "pre-Update Metadata 2025":
+## prcStep[i].xpath("./stepDateTm")[0].text = CreaDateTime
+## elif stepDesc.text == "Update Metadata 2025":
+## prcStep[i].xpath("./stepDateTm")[0].text = ModDateTime
+## elif stepDesc.text not in ["pre-Update Metadata 2025", "Update Metadata 2025"]:
+## prcStep[i].xpath("./stepDateTm")[0].text = CreaDateTime
+## del stepDesc
+## del i
+## else:
+## pass
+## del prcStep
+
+ #srcDesc = target_root.xpath("./dqInfo/dataLineage/dataSource/srcDesc")
+ #for _srcDesc in srcDesc:
+ # #print(f"\t{etree.tostring(_srcDesc, encoding='UTF-8', method='xml', pretty_print=True).decode()}")
+ # del _srcDesc
+ #del srcDesc
+ # print(etree.tostring(reports[0].getparent(), encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #del dataSources
+ #dataLineage = target_root.xpath("./dqInfo/dataLineage")
+ #for i in range(0, len(dataLineage)):
+ # #print(etree.tostring(dataLineage[i], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ # del i
+ #del dataLineage
+ #distInfo = target_root.xpath("./distInfo")[0]
+ #print(etree.tostring(distInfo, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ #del distInfo
+
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # Reorder Elements
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # Metadata
+## target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag])
+## # Esri
+## Esri = target_root.xpath("./Esri")[0]
+## Esri[:] = sorted(Esri, key=lambda x: esri_dict[x.tag])
+## #print(etree.tostring(Esri, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del Esri
+## # dataIdInfo
+## dataIdInfo = target_root.xpath("./dataIdInfo")[0]
+## dataIdInfo[:] = sorted(dataIdInfo, key=lambda x: dataIdInfo_dict[x.tag])
+## #print(etree.tostring(dataIdInfo, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del dataIdInfo
+## # dqInfo
+## dqInfo = target_root.xpath("./dqInfo")[0]
+## dqInfo[:] = sorted(dqInfo, key=lambda x: dqInfo_dict[x.tag])
+## #print(etree.tostring(dqInfo, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del dqInfo
+## # distInfo
+## distInfo = target_root.xpath("./distInfo")[0]
+## distInfo[:] = sorted(distInfo, key=lambda x: distInfo_dict[x.tag])
+## #print(etree.tostring(distInfo, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del distInfo
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+
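The commented-out reordering block above sorts each element's children by a tag-to-position dictionary (`root_dict`, `esri_dict`, and so on, loaded elsewhere). A minimal, self-contained illustration of the same slice-assignment idiom, using the stdlib ElementTree and a made-up order dict rather than the module's actual dictionaries:

```python
import xml.etree.ElementTree as ET

# Stand-in for root_dict; the real mapping is loaded from JSON elsewhere.
order = {"Esri": 0, "dataIdInfo": 1, "dqInfo": 2, "distInfo": 3}

root = ET.fromstring("<metadata><distInfo/><Esri/><dqInfo/><dataIdInfo/></metadata>")
# Slice assignment replaces the children in place, keeping the parent element intact.
root[:] = sorted(root, key=lambda child: order[child.tag])
print([child.tag for child in root])  # ['Esri', 'dataIdInfo', 'dqInfo', 'distInfo']
```

The same pattern works on lxml elements, which is what the surrounding code uses.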
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ etree.indent(target_tree, space=' ')
+ dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+
+ SaveBackXml = False
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ #_target_tree = etree.parse(StringIO(dataset_md.xml), parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ #_target_tree.write(rf"{export_folder}\{dataset_name}.xml", pretty_print=True)
+ #print(etree.tostring(_target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ #del _target_tree
+ del dataset_md
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ # Declared Variables
+ del project
+ del contacts_xml_tree
+ del source_target_merge
+ del export_folder
+ del mdContact_rpIndName, mdContact_eMailAdd, mdContact_role
+ del citRespParty_rpIndName, citRespParty_eMailAdd, citRespParty_role
+ del idPoC_rpIndName, idPoC_eMailAdd, idPoC_role
+ del distorCont_rpIndName, distorCont_eMailAdd, distorCont_role
+ del srcCitatn_rpIndName, srcCitatn_eMailAdd, srcCitatn_role
+ del dataset_name
+ del CreaDateTime, ModDateTime
+ del contact_dict
+ #del RoleCd_dict, tpCat_dict,
+ del dataIdInfo_dict, dqInfo_dict, distInfo_dict, esri_dict, root_dict
+ # Imports
+ del etree, md, BytesIO, StringIO, copy
+ # Declared variables
+ del target_root, target_tree
+ # Function Parameters
+ del dataset_path
+
+ except KeyboardInterrupt:
+ raise SystemExit
+ except SystemExit:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+def add_update_dates(dataset_path=""):
+ try:
+ # Imports
+ from lxml import etree
+ from io import StringIO, BytesIO
+ import copy
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+ print(f"Processing Add/Update Dates for dataset: '{dataset_name}'")
+ #print(f"\tDataset Location: {os.path.basename(os.path.dirname(dataset_path))}")
+
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+ CreaDate = target_root.xpath(f"//Esri/CreaDate")[0].text
+ CreaTime = target_root.xpath(f"//Esri/CreaTime")[0].text
+ #print(CreaDate, CreaTime)
+ CreaDateTime = f"{CreaDate[:4]}-{CreaDate[4:6]}-{CreaDate[6:]}T{CreaTime[:2]}:{CreaTime[2:4]}:{CreaTime[4:6]}"
+ #print(f"\tCreaDateTime: {CreaDateTime}")
+ #del CreaDateTime
+ del CreaDate, CreaTime
+ ModDate = target_root.xpath(f"//Esri/ModDate")[0].text
+ ModTime = target_root.xpath(f"//Esri/ModTime")[0].text
+ #print(ModDate, ModTime)
+ ModDateTime = f"{ModDate[:4]}-{ModDate[4:6]}-{ModDate[6:]}T{ModTime[:2]}:{ModTime[2:4]}:{ModTime[4:6]}"
+ #print(f"\tModDateTime: {ModDateTime}")
+ #del ModDateTime
+ del ModDate, ModTime
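The slice-built `CreaDateTime`/`ModDateTime` strings above assume Esri's `CreaDate` (YYYYMMDD) and `CreaTime` (hhmmss, sometimes with trailing fractions) fields are always fully padded. A `strptime` round-trip validates the input as it converts; this is a sketch of an alternative, not part of the original function:

```python
from datetime import datetime

def esri_datetime(date_str, time_str):
    """Convert Esri CreaDate/CreaTime strings (YYYYMMDD, hhmmss) to ISO 8601."""
    # strptime raises ValueError on malformed input instead of silently
    # producing a garbled slice-built string; time_str[:6] drops any
    # trailing fractional digits Esri may append.
    dt = datetime.strptime(f"{date_str}{time_str[:6]}", "%Y%m%d%H%M%S")
    return dt.strftime("%Y-%m-%dT%H:%M:%S")

print(esri_datetime("20250115", "134500"))  # 2025-01-15T13:45:00
```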
+
+ dates = target_tree.xpath(f"//date")
+ count=0
+ count_dates = len(dates)
+ for date in dates:
+ #_date = copy.deepcopy(date)
+ count+=1
+
+ createDate = date.xpath(f"./createDate")
+ #print(f"Element list: '{createDate}'")
+ #print(f"Element count: '{len(createDate)}'")
+ #print(len(createDate[0].text))
+ #print(type(createDate[0].text))
+ if not len(createDate):
+ _xml = f"<createDate>{CreaDateTime}</createDate>"
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ date.insert(0, _root)
+ del _root, _xml
+ elif len(createDate) and createDate[0].text is not None:
+ pass
+ #print(f"createDate exists and has content '{createDate[0].text}'")
+ #print(etree.tostring(createDate[0], encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ elif len(createDate) and createDate[0].text is None:
+ #print("createDate exists but does not have content.")
+ createDate[0].text = CreaDateTime
+ date.insert(0, createDate[0])
+ del createDate
+
+ pubDate = date.xpath("./pubDate")
+ if not len(pubDate):
+ _xml = f"<pubDate>{CreaDateTime}</pubDate>"
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ date.insert(1, _root)
+ del _root, _xml
+ elif len(pubDate) and pubDate[0].text is not None:
+ pass
+ #print(f"pubDate exists and has content '{pubDate[0].text}'")
+ #print(etree.tostring(pubDate[0], encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ elif len(pubDate) and pubDate[0].text is None:
+ #print("pubDate exists but does not have content.")
+ pubDate[0].text = CreaDateTime
+ date.insert(1, pubDate[0])
+ del pubDate
+
+ reviseDate = date.xpath("./reviseDate")
+ if not len(reviseDate):
+ _xml = f"<reviseDate>{ModDateTime}</reviseDate>"
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ date.insert(2, _root)
+ del _root, _xml
+ elif len(reviseDate) and reviseDate[0].text is not None:
+ pass
+ #print(f"reviseDate exists and has content '{reviseDate[0].text}'")
+ #print(etree.tostring(reviseDate[0], encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ elif len(reviseDate) and reviseDate[0].text is None:
+ #print("reviseDate exists but does not have content.")
+ reviseDate[0].text = ModDateTime
+ date.insert(2, reviseDate[0])
+ del reviseDate
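The createDate/pubDate/reviseDate branches above repeat one pattern: make sure a child element exists and has text, creating or filling it if not. A generic helper could collapse the three blocks into three calls; this is a sketch using the stdlib ElementTree (the surrounding code uses lxml, where the same calls work), with a hypothetical helper name:

```python
import xml.etree.ElementTree as ET

def ensure_child(parent, tag, default_text, position):
    """Ensure `parent` has a <tag> child with text; create or fill it if missing."""
    child = parent.find(tag)
    if child is None:
        # Element is absent entirely: create it at the requested position.
        child = ET.Element(tag)
        child.text = default_text
        parent.insert(position, child)
    elif child.text is None:
        # Element exists but is empty: just fill in the text.
        child.text = default_text
    return child

date = ET.fromstring("<date><pubDate/></date>")
ensure_child(date, "createDate", "2025-01-15T13:45:00", 0)
ensure_child(date, "pubDate", "2025-01-15T13:45:00", 1)
print(ET.tostring(date, encoding="unicode"))
```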
+
+## if len(createDate) == 0:
+## _xml = f"{CreaDateTime}"
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## date.insert(0, _root)
+## del _root, _xml
+## elif len(createDate) == 1:
+## if createDate[0].text:
+## createDate[0].text = createDate[0].text
+## elif not createDate[0].text:
+## createDate[0].text = CreaDateTime
+## else:
+## pass
+## else:
+## pass
+ #print(etree.tostring(createDate[0], encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+## pubDate = date.xpath(f"./date/pubDate")
+## if len(pubDate) == 0:
+## _xml = f"{CreaDateTime}"
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## date.insert(0, _root)
+## del _root, _xml
+## elif len(pubDate) == 1:
+## if pubDate[0].text:
+## pubDate[0].text = pubDate[0].text
+## elif not pubDate[0].text:
+## pubDate[0].text = CreaDateTime
+## else:
+## pass
+## else:
+## pass
+## del pubDate
+##
+## try:
+## revisedDate = date.xpath(f"./date/revisedDate")[0]
+## revisedDate.tag = "reviseDate"
+## del revisedDate
+## except:
+## pass
+##
+## reviseDate = date.xpath(f"./date/reviseDate")
+## if len(reviseDate) == 0:
+## _xml = f"{CreaDateTime}"
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## date.insert(0, _root)
+## del _root, _xml
+## elif len(reviseDate) == 1:
+## if reviseDate[0].text:
+## reviseDate[0].text = reviseDate[0].text
+## elif not reviseDate[0].text:
+## reviseDate[0].text = ModDateTime
+## else:
+## pass
+## else:
+## pass
+## del reviseDate
+
+## date.getparent().replace(date, _date)
+
+ #print(etree.tostring(date, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+
+ del date
+ del count, count_dates
+ del dates
+
+ dates = target_root.xpath(f"//date")
+ count=0
+ count_dates = len(dates)
+ for date in dates:
+ count+=1
+ #print(f"\tDate: {count} of {count_dates}")
+ #print(f"\t\tCreaDateTime: {CreaDateTime}")
+ #print(f"\t\tModDateTime: {ModDateTime}")
+ #print(date.getroottree().getpath(date))
+ #print(etree.tostring(date, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ del date
+ del count, count_dates
+ del dates
+
+ # No changes needed below
+ #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ etree.indent(target_root, space=' ')
+ dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+
+ SaveBackXml = True
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ #dataset_md.reload()
+ del dataset_md
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ del dataset_name, target_tree, target_root
+
+ # Declared Variables
+ del CreaDateTime, ModDateTime
+ # Imports
+ del etree, StringIO, BytesIO, copy, md
+ # Function Parameters
+ del dataset_path
+
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+def basic_metadata_report(dataset_path=""):
+ try:
+ # Imports
+ from lxml import etree
+ from io import StringIO, BytesIO
+ import copy
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ project_gdb = os.path.dirname(dataset_path)
+ project_folder = os.path.dirname(project_gdb)
+ project = os.path.basename(os.path.dirname(project_gdb))
+ export_folder = rf"{project_folder}\Export"
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+ del scratch_folder
+ del project_folder
+ del project_gdb
+
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+ print(f"Reporting on Basic XML Metadata for dataset: '{dataset_name}'")
+ #print(f"\tDataset Location: {os.path.basename(os.path.dirname(dataset_path))}")
+
+ dataset_md = md.Metadata(dataset_path)
+ #dataset_md.synchronize("ALWAYS")
+ #dataset_md.save()
+ #dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+ dataset_md = md.Metadata(dataset_path)
+ print(f"\tTitle: {dataset_md.title}")
+ print(f"\tSearch Keys: {dataset_md.tags}")
+ print(f"\tSummary: {dataset_md.summary}")
+ print(f"\tDescription: {dataset_md.description}")
+ print(f"\tCredits: {dataset_md.credits}")
+ print(f"\tUse Limits: {dataset_md.accessConstraints}")
+ del dataset_md
+
+ # No changes needed below
+ #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ etree.indent(target_root, space=' ')
+ dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+
+ SaveBackXml = True
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ #dataset_md.reload()
+ del dataset_md
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ # Declared Variables
+ del project, export_folder
+ del dataset_name
+ del target_tree, target_root
+ # Imports
+ del etree, StringIO, BytesIO, copy, md
+ # Function Parameters
+ del dataset_path
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+def metadata_esri_report(dataset_path=""):
+ try:
+ # Imports
+ from lxml import etree
+ from io import StringIO, BytesIO
+ import copy
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+ print(f"Reporting on Esri XML for dataset: '{dataset_name}'")
+ #print(f"\tDataset Location: {os.path.basename(os.path.dirname(dataset_path))}")
+
+ dataset_md = md.Metadata(dataset_path)
+ #dataset_md.synchronize("ALWAYS")
+ #dataset_md.save()
+ #dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+ if target_root.find("Esri") is not None:
+ Esri = target_root.xpath("./Esri")[0]
+ # NOTE: element names reconstructed from the standard ArcGIS metadata schema
+ _xml = '''<Esri>
+ <ArcGISstyle>ISO 19139 Metadata Implementation Specification GML3.2</ArcGISstyle>
+ <ArcGISProfile>ISO19139</ArcGISProfile>
+ </Esri>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ # Merge Target with Source
+ target_source_merge = xml_tree_merge(Esri, _root)
+ #print(etree.tostring(target_source_merge, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ # Merge Source with Target
+ source_target_merge = xml_tree_merge(target_source_merge, Esri)
+ #print(etree.tostring(source_target_merge, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ Esri.getparent().replace(Esri, source_target_merge)
+ del target_source_merge, source_target_merge
+ del _root, _xml
+ #print(etree.tostring(target_root.find("Esri"), encoding='UTF-8', method='xml', pretty_print=True).decode())
+ del Esri
+ else:
+ pass
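`xml_tree_merge` (defined elsewhere in this module) is applied in both directions above so that elements present in either tree survive the merge. A rough sketch of what such a recursive merge can look like; this is an illustration under that assumption, not the module's actual implementation:

```python
import xml.etree.ElementTree as ET

def merge_trees(target, source):
    """Recursively copy children of `source` into `target` when the tag is absent."""
    for src_child in source:
        tgt_child = target.find(src_child.tag)
        if tgt_child is None:
            target.append(src_child)           # take the whole missing subtree
        else:
            if tgt_child.text is None:
                tgt_child.text = src_child.text
            merge_trees(tgt_child, src_child)  # recurse into matching children
    return target

a = ET.fromstring("<Esri><CreaDate>20250101</CreaDate></Esri>")
b = ET.fromstring("<Esri><ArcGISProfile>ISO19139</ArcGISProfile></Esri>")
merged = merge_trees(a, b)
print(sorted(child.tag for child in merged))  # ['ArcGISProfile', 'CreaDate']
```

Running the merge in both directions, as the code above does, makes the operation symmetric for elements that exist only on one side.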
+
+ # No changes needed below
+ #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ etree.indent(target_root, space=' ')
+ dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+
+ SaveBackXml = True
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ #dataset_md.reload()
+ del dataset_md
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ # Declared Variables
+ del dataset_name
+ del target_tree, target_root
+ # Imports
+ del etree, StringIO, BytesIO, copy, md
+ # Function Parameters
+ del dataset_path
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+def metadata_dataidinfo_report(dataset_path=""):
+ try:
+ # Imports
+ from lxml import etree
+ from io import StringIO, BytesIO
+ import copy
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ import json
+ # Derive the project folder (the dataset's GDB lives inside it), as in basic_metadata_report
+ project_folder = os.path.dirname(os.path.dirname(dataset_path))
+ json_path = rf"{project_folder}\dataIdInfo_dict.json"
+ with open(json_path, "r") as json_file:
+ dataIdInfo_dict = json.load(json_file)
+ del json_file
+ del json_path
+ json_path = rf"{project_folder}\root_dict.json"
+ with open(json_path, "r") as json_file:
+ root_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del project_folder
+ del json
+
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+ print(f"Reporting on dataIdInfo XML for dataset: '{dataset_name}'")
+ #print(f"\tDataset Location: {os.path.basename(os.path.dirname(dataset_path))}")
+
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+ #target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag])
+ #for child in target_root:
+ # #print(child.tag)
+ # #print(etree.tostring(child, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ # del child
+ # Esri
+ # dataIdInfo
+ # dqInfo
+ # distInfo
+ # mdContact
+ # mdLang
+ # mdChar
+ # mdDateSt
+ # mdHrLv
+ # mdHrLvName
+ # mdFileID
+ # mdMaint
+ # refSysInfo
+ # spatRepInfo
+ # spdoinfo
+ # eainfo
+
+ refSysInfo = target_root.xpath("./refSysInfo")
+ if len(refSysInfo) == 0:
+ pass #print("missing")
+ elif len(refSysInfo) == 1:
+ pass #print(etree.tostring(target_root.xpath("./refSysInfo")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ elif len(refSysInfo) > 1:
+ pass #print("too many")
+ else:
+ pass
+ del refSysInfo
+
+ dqInfo = target_root.xpath("./dqInfo")
+ if len(dqInfo) == 0:
+ # NOTE: child elements reconstructed to satisfy the ./dqInfo/dqScope/scpLvl/ScopeCd lookup below
+ _xml = '''<dqInfo>
+ <dqScope>
+ <scpLvl>
+ <ScopeCd value="005"/>
+ </scpLvl>
+ <scpLvlDesc>
+ <datasetSet>dataset</datasetSet>
+ </scpLvlDesc>
+ </dqScope>
+ </dqInfo>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.insert(root_dict["dqInfo"], _root)
+ del _root, _xml
+ else:
+ pass
+ del dqInfo
+
+ mdHrLv = target_root.xpath("./mdHrLv")
+ if len(mdHrLv) == 0:
+ # NOTE: reconstructed; 005 = "dataset" in the ISO ScopeCd codelist
+ _xml = '''<mdHrLv>
+ <ScopeCd value="005"/>
+ </mdHrLv>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.insert(root_dict["mdHrLv"], _root)
+ del _root, _xml
+ else:
+ pass
+ del mdHrLv
+
+ mdHrLvName = target_root.xpath("./mdHrLvName")
+ if len(mdHrLvName) == 0:
+ _xml = '''<mdHrLvName>dataset</mdHrLvName>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.insert(root_dict["mdHrLvName"], _root)
+ del _root, _xml
+ else:
+ pass
+ del mdHrLvName
+
+ fgdcGeoform = target_root.xpath("./dataIdInfo/idCitation/presForm/fgdcGeoform")
+ if len(fgdcGeoform) == 0:
+ _xml = '''<fgdcGeoform>document</fgdcGeoform>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.xpath("./dataIdInfo/idCitation/presForm")[0].insert(dataIdInfo_dict["fgdcGeoform"], _root)
+ del _root, _xml
+ else:
+ pass
+ del fgdcGeoform
+
+ #print(etree.tostring(target_root.xpath("./distInfo")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+
+ distInfo = target_root.xpath("./distInfo")
+ if len(distInfo) == 0:
+ # NOTE: distFormat children reconstructed to match the xpaths used below
+ _xml = '''<distInfo>
+ <distFormat>
+ <formatName>ESRI REST Service</formatName>
+ <formatVer/>
+ <fileDecmTech>Uncompressed</fileDecmTech>
+ <formatInfo/>
+ </distFormat>
+ </distInfo>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ # Insert at the top level of the metadata document, not under presForm
+ target_root.insert(root_dict["distInfo"], _root)
+ del _root, _xml
+ elif len(distInfo) == 1:
+ formatVer = distInfo[0].xpath("./distFormat/formatVer")
+ if len(formatVer) == 0:
+ _xml = '''<formatVer/>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.xpath("./distInfo/distFormat")[0].insert(1, _root)
+ del _root, _xml
+ elif len(formatVer) == 1:
+ pass
+ elif len(formatVer) > 1:
+ for i in range(1, len(formatVer)):
+ formatVer[i].getparent().remove(formatVer[i])
+ del i
+ else:
+ pass
+ del formatVer
+
+ fileDecmTech = distInfo[0].xpath("./distFormat/fileDecmTech")
+ if len(fileDecmTech) == 0:
+ _xml = '''<fileDecmTech/>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.xpath("./distInfo/distFormat")[0].insert(2, _root)
+ del _root, _xml
+ elif len(fileDecmTech) == 1:
+ pass
+ elif len(fileDecmTech) > 1:
+ for i in range(1, len(fileDecmTech)):
+ fileDecmTech[i].getparent().remove(fileDecmTech[i])
+ del i
+ else:
+ pass
+ del fileDecmTech
+
+ formatInfo = distInfo[0].xpath("./distFormat/formatInfo")
+ if len(formatInfo) == 0:
+ _xml = '''<formatInfo/>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.xpath("./distInfo/distFormat")[0].insert(2, _root)
+ del _root, _xml
+ elif len(formatInfo) == 1:
+ pass
+ elif len(formatInfo) > 1:
+ for i in range(1, len(formatInfo)):
+ formatInfo[i].getparent().remove(formatInfo[i])
+ del i
+ else:
+ pass
+ del formatInfo
+ else:
+ pass
+ del distInfo
+
+ #print(etree.tostring(target_root.xpath("./mdHrLv")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./mdHrLvName")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./dqInfo/dqScope")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./dataIdInfo/idCitation/presForm")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./distInfo")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+
+ #raise Exception
+ mdHrLvName = target_root.xpath("./mdHrLvName")
+ if len(mdHrLvName) == 0:
+ _xml = '''<mdHrLvName>dataset</mdHrLvName>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.insert(root_dict["mdHrLvName"], _root)
+ del _root, _xml
+ else:
+ pass
+ del mdHrLvName
+
+ target_root.xpath("./dataIdInfo/envirDesc")[0].set('Sync', "TRUE")
+ #target_root.xpath("./dqInfo/dqScope/scpLvl/ScopeCd")[0].set('value', "005")
+ #target_root.xpath("./dqInfo/dqScope/scpLvl/ScopeCd")[0].set('Sync', "TRUE")
+ mdHrLvName = target_root.xpath("./mdHrLvName")[0]
+ ScopeCd = target_root.xpath("./dqInfo/dqScope/scpLvl/ScopeCd")[0]
+ PresFormCd = target_root.xpath("./dataIdInfo/idCitation/presForm/PresFormCd")[0]
+ fgdcGeoform = target_root.xpath("./dataIdInfo/idCitation/presForm/fgdcGeoform")[0]
+ SpatRepTypCd = target_root.xpath("./dataIdInfo/spatRpType/SpatRepTypCd")[0]
+ PresFormCd.set('Sync', "TRUE")
+ #print("------" * 10)
+ #print(etree.tostring(SpatRepTypCd, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./dataIdInfo/idCitation/presForm")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./dqInfo/dqScope")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print("------" * 10)
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # SpatRepTypCd "001" (vector) "002" (raster/grid) "003" (tabular)
+ # ScopeCd "005" "005" "007"
+ # PresFormCd "005" "003" "011"
+ # fgdcGeoform "vector digital data" "raster digital data" "tabular digital data"
+ # mdHrLvName "Vector Digital Data" "Raster Digital Data" "Tabular Digital Data"
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ datasetSet = target_root.xpath("./dqInfo/dqScope/scpLvlDesc/datasetSet")
+ if len(datasetSet) == 0:
+ _xml = '<scpLvlDesc><datasetSet></datasetSet></scpLvlDesc>'
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.xpath("./dqInfo/dqScope")[0].insert(1, _root)
+ del _root, _xml
+ else:
+ pass
+ datasetSet = target_root.xpath("./dqInfo/dqScope/scpLvlDesc/datasetSet")[0]
+ if SpatRepTypCd.get("value") == "001":
+ ScopeCd.set('value', "005")
+ PresFormCd.set("value", "005")
+ fgdcGeoform.text = "vector digital data"
+ datasetSet.text = "Vector Digital Data"
+ mdHrLvName.text = "Vector Digital Data"
+ elif SpatRepTypCd.get("value") == "002":
+ ScopeCd.set('value', "005")
+ PresFormCd.set("value", "003")
+ fgdcGeoform.text = "raster digital data"
+ datasetSet.text = "Raster Digital Data"
+ mdHrLvName.text = "Raster Digital Data"
+ elif SpatRepTypCd.get("value") == "003":
+ ScopeCd.set('value', "007")
+ PresFormCd.set("value", "011")
+ fgdcGeoform.text = "tabular digital data"
+ datasetSet.text = "Tabular Digital Data"
+ mdHrLvName.text = "Tabular Digital Data"
+ else:
+ pass
+ #print("------" * 10)
+ #print(etree.tostring(SpatRepTypCd, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./dataIdInfo/idCitation/presForm")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./dqInfo/dqScope")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print("------" * 10)
+ del datasetSet, SpatRepTypCd, fgdcGeoform, PresFormCd, ScopeCd, mdHrLvName
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ formatName = target_root.xpath("./distInfo/distFormat/formatName")[0]
+ envirDesc = target_root.xpath("./dataIdInfo/envirDesc")[0]
+ envirDesc.set('Sync', "TRUE")
+ target_root.xpath("./distInfo/distFormat/fileDecmTech")[0].text = "Uncompressed"
+ formatName.text = "ESRI REST Service"
+ formatVer = target_root.xpath("./distInfo/distFormat/formatVer")[0]
+ formatVer.text = envirDesc.text.strip()
+ del envirDesc
+ del formatVer
+ del formatName
+
+ xml_file = b'''<idPoC>
+ <editorSource>external</editorSource>
+ <editorDigest>c66ffbb333c48d18d81856ec0e0c37ea752bff1a</editorDigest>
+ <rpIndName>Melissa Ann Karp</rpIndName>
+ <rpOrgName>NMFS Office of Science and Technology</rpOrgName>
+ <rpPosName>Fisheries Science Coordinator</rpPosName>
+ <rpCntInfo>
+ <cntAddress>
+ <delPoint>1315 East West Hwy</delPoint>
+ <city>Silver Spring</city>
+ <adminArea>MD</adminArea>
+ <postCode>20910-3282</postCode>
+ <eMailAdd>melissa.karp@noaa.gov</eMailAdd>
+ <country>US</country>
+ </cntAddress>
+ <cntPhone>
+ <voiceNum>301-427-8202</voiceNum>
+ <faxNum>301-713-4137</faxNum>
+ </cntPhone>
+ <cntHours>0700 - 1800 EST/EDT</cntHours>
+ <cntOnlineRes>
+ <linkage>https://www.fisheries.noaa.gov/about/office-science-and-technology</linkage>
+ <protocol>REST Service</protocol>
+ <orName>NMFS Office of Science and Technology</orName>
+ <orDesc>NOAA Fisheries Office of Science and Technology</orDesc>
+ </cntOnlineRes>
+ </rpCntInfo>
+ <displayName>Melissa Ann Karp</displayName>
+ <editorSave>True</editorSave>
+ </idPoC>
+ '''
+ _tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ _root = _tree.getroot()
+ idPoC = target_root.xpath("./dataIdInfo/idPoC")
+ if len(idPoC) == 0:
+ target_root.xpath("./dataIdInfo")[0].insert(dataIdInfo_dict["idPoC"], _root)
+ elif len(idPoC) == 1:
+ idPoC[0].getparent().replace(idPoC[0], _root)
+ else:
+ pass
+ del _root, _tree, xml_file
+ #print(f"\n\t{etree.tostring(target_root.xpath(f'./dataIdInfo/idPoC')[0], encoding='UTF-8', method='xml', pretty_print=True).decode()}\n")
+ del idPoC
+
+ xml_file = b'''<citRespParty>
+ <editorSource>external</editorSource>
+ <editorDigest>579ce2e21b888ac8f6ac1dac30f04cddec7a0d7c</editorDigest>
+ <rpIndName>NMFS Office of Science and Technology</rpIndName>
+ <rpOrgName>NMFS Office of Science and Technology</rpOrgName>
+ <rpPosName>GIS App Developer</rpPosName>
+ <rpCntInfo>
+ <cntAddress>
+ <delPoint>1315 East West Highway</delPoint>
+ <city>Silver Spring</city>
+ <adminArea>MD</adminArea>
+ <postCode>20910-3282</postCode>
+ <country>US</country>
+ <eMailAdd>tim.haverland@noaa.gov</eMailAdd>
+ </cntAddress>
+ <cntPhone>
+ <voiceNum>301-427-8137</voiceNum>
+ <faxNum>301-713-4137</faxNum>
+ </cntPhone>
+ <cntHours>0700 - 1800 EST/EDT</cntHours>
+ <cntOnlineRes>
+ <linkage>https://www.fisheries.noaa.gov/about/office-science-and-technology</linkage>
+ <protocol>REST Service</protocol>
+ <orName>NMFS Office of Science and Technology</orName>
+ <orDesc>NOAA Fisheries Office of Science and Technology</orDesc>
+ </cntOnlineRes>
+ </rpCntInfo>
+ <displayName>NMFS Office of Science and Technology (Distributor)</displayName>
+ <editorSave>True</editorSave>
+ </citRespParty>
+ '''
+ _tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ _root = _tree.getroot()
+ citRespParty = target_root.xpath("./dataIdInfo/idCitation/citRespParty")
+ if len(citRespParty) == 0:
+ target_root.xpath("./dataIdInfo/idCitation")[0].insert(dataIdInfo_dict["citRespParty"], _root)
+ elif len(citRespParty) == 1:
+ citRespParty[0].getparent().replace(citRespParty[0], _root)
+ else:
+ pass
+ del _root, _tree, xml_file
+ #print(f"\n\t{etree.tostring(target_root.xpath(f'./dataIdInfo/idCitation/citRespParty')[0], encoding='UTF-8', method='xml', pretty_print=True).decode()}\n")
+ del citRespParty
+
+ resConst = target_root.xpath("./dataIdInfo/resConst")
+ if len(resConst) == 1:
+ xml_file = b'''<resConst>
+ <LegConsts>
+ <othConsts>Data License: CC0-1.0
+Data License URL: https://creativecommons.org/publicdomain/zero/1.0/
+Data License Statement: These data were produced by NOAA and are not subject to copyright protection in the United States. NOAA waives any potential copyright and related rights in these data worldwide through the Creative Commons Zero 1.0 Universal Public Domain Dedication (CC0-1.0).</othConsts>
+ </LegConsts>
+ <SecConsts>
+ <userNote>FISMA Low</userNote>
+ </SecConsts>
+ <Consts>
+ <useLimit>&lt;DIV STYLE="text-align:Left;"&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;P&gt;&lt;SPAN&gt;***No Warranty*** The user assumes the entire risk related to its use of these data. NMFS is providing these data 'as is' and NMFS disclaims any and all warranties, whether express or implied, including (without limitation) any implied warranties of merchantability or fitness for a particular purpose. No warranty expressed or implied is made regarding the accuracy or utility of the data on any other system or for general or scientific purposes, nor shall the act of distribution constitute any such warranty. It is strongly recommended that careful attention be paid to the contents of the metadata file associated with these data to evaluate dataset limitations, restrictions or intended use. In no event will NMFS be liable to you or to any third party for any direct, indirect, incidental, consequential, special or exemplary damages or lost profit resulting from any use or misuse of these data.&lt;/SPAN&gt;&lt;/P&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;</useLimit>
+ </Consts>
+ </resConst>
+ '''
+ _tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ _root = _tree.getroot()
+ resConst[0].getparent().replace(resConst[0], _root)
+ del _root, _tree, xml_file
+ else:
+ pass
+ del resConst
+
+ dataIdInfo = target_root.xpath("./dataIdInfo")
+ for data_Id_Info in dataIdInfo:
+ data_Id_Info[:] = sorted(data_Id_Info, key=lambda x: dataIdInfo_dict[x.tag])
+ #print(etree.tostring(data_Id_Info, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ del data_Id_Info
+ del dataIdInfo
+
+ # No changes needed below
+ #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ etree.indent(target_root, space=' ')
+ dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+
+ SaveBackXml = True
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ #dataset_md.reload()
+ del dataset_md
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ # Declared Variables
+ del dataset_name
+ del dataIdInfo_dict, root_dict
+ del target_tree, target_root
+ # Imports
+ del etree, StringIO, BytesIO, copy, md
+ # Function Parameters
+ del dataset_path
+ except KeyboardInterrupt:
+ raise SystemExit
+ except Exception:
+ traceback.print_exc()
+ raise SystemExit
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+def metadata_dq_info_report(dataset_path=""):
+ try:
+ # Imports
+ from lxml import etree
+ from io import StringIO, BytesIO
+ import copy
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ import json
+ json_path = rf"{project_folder}\dqInfo_dict.json"
+ with open(json_path, "r") as json_file:
+ dqInfo_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del json
+
+ export_folder = rf"{os.path.dirname(os.path.dirname(dataset_path))}\Export"
+
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+ print(f"Reporting on dqInfo XML for dataset: '{dataset_name}'")
+ #print(f"\tDataset Location: {os.path.basename(os.path.dirname(dataset_path))}")
+
+ dataset_md = md.Metadata(dataset_path)
+ #dataset_md.synchronize("ALWAYS")
+ #dataset_md.save()
+ #dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+ # dqInfo
+ dqInfo = target_root.xpath("./dqInfo")[0]
+ dqInfo[:] = sorted(dqInfo, key=lambda x: dqInfo_dict[x.tag])
+ #print(etree.tostring(dqInfo, encoding='UTF-8', method='xml', pretty_print=True).decode())
+
+ #target_root[:] = sorted(target_root, key=lambda x: dqInfo_dict[x.tag])
+ #for child in target_root:
+ # #print(child.tag)
+ # #print(etree.tostring(child, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ # del child
+ # Esri
+ # dataIdInfo
+ # dqInfo
+ # distInfo
+ # mdContact
+ # mdLang
+ # mdChar
+ # mdDateSt
+ # mdHrLv
+ # mdHrLvName
+ # mdFileID
+ # mdMaint
+ # refSysInfo
+ # spatRepInfo
+ # spdoinfo
+ # eainfo
+
+ _report = target_root.xpath("./dqInfo/report[@type='DQConcConsis']")
+ if len(_report) == 1:
+ _xml = '''<report type="DQConcConsis">
+ <measDesc>Based on a review from DisMAP Team all necessary features are present.</measDesc>
+ <measResult>
+ <conResult>
+ <conSpec>
+ <resTitle>Conceptual Consistency Report</resTitle>
+ <citRespParty>
+ <rpOrgName>NMFS OST DisMAP</rpOrgName>
+ </citRespParty>
+ </conSpec>
+ <conExpl>Based on a review from DisMAP Team all necessary features are present.</conExpl>
+ <conPass>1</conPass>
+ </conResult>
+ </measResult>
+ </report>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ #print(f"{etree.tostring(_root, encoding='UTF-8', method='xml', pretty_print=True).decode()}")
+ #raise SystemExit
+ _report[0].getparent().replace(_report[0], _root)
+ del _root, _xml
+ else:
+ pass
+ del _report
+
+ _report = target_root.xpath("./dqInfo/report[@type='DQCompOm']")
+ if len(_report) == 1:
+ _xml = '''<report type="DQCompOm">
+ <measDesc>Based on a review from DisMAP Team all necessary features are present.</measDesc>
+ <measResult>
+ <conResult>
+ <conSpec>
+ <resTitle>Completeness Report</resTitle>
+ <citRespParty>
+ <rpOrgName>NMFS OST DisMAP</rpOrgName>
+ </citRespParty>
+ </conSpec>
+ <conExpl>Based on a review from DisMAP Team all necessary features are present.</conExpl>
+ <conPass>1</conPass>
+ </conResult>
+ </measResult>
+ </report>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ _report[0].getparent().replace(_report[0], _root)
+ del _root, _xml
+ else:
+ pass
+ del _report
+
+ dqInfo = target_root.xpath("./dqInfo")
+ #print(len(dqInfo))
+ for dq_Info in dqInfo:
+ dq_Info[:] = sorted(dq_Info, key=lambda x: dqInfo_dict[x.tag])
+ #print(etree.tostring(dq_Info, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ del dq_Info
+ del dqInfo
+
+ #print(len(target_root.xpath("./dqInfo")))
+ # Snapshot the extra dqInfo elements first; re-running the XPath inside the
+ # loop while removing nodes would shift the indices and raise an IndexError
+ for dq_Info in target_root.xpath("./dqInfo")[1:]:
+ # Writing to a new file
+ with open(rf"{export_folder}\{os.path.basename(dataset_path)} dqInfo.xml", "w", encoding="utf-8") as file:
+ file.write(etree.tostring(dq_Info, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ del file
+ dq_Info.getparent().remove(dq_Info)
+ del dq_Info
+
+ # No changes needed below
+ #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ etree.indent(target_root, space=' ')
+ dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+
+ SaveBackXml = True
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ #dataset_md.reload()
+ del dataset_md
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ # Declared Variables
+ del export_folder
+ del dataset_name
+ del dqInfo_dict
+ del target_tree, target_root
+ # Imports
+ del etree, StringIO, BytesIO, copy, md
+ # Function Parameters
+ del dataset_path
+ except KeyboardInterrupt:
+ raise SystemExit
+ except Exception:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+def metadata_dist_info_report(dataset_path=""):
+ try:
+ # Imports
+ from lxml import etree
+ from io import StringIO, BytesIO
+ import copy
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ import json
+ json_path = rf"{project_folder}\distInfo_dict.json"
+ with open(json_path, "r") as json_file:
+ distInfo_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del json
+
+ project = os.path.basename(os.path.dirname(os.path.dirname(dataset_path)))
+ export_folder = rf"{os.path.dirname(os.path.dirname(dataset_path))}\Export"
+
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+ print(f"Reporting on distInfo XML for dataset: '{dataset_name}'")
+ #print(f"\tDataset Location: {os.path.basename(os.path.dirname(dataset_path))}")
+
+ dataset_md = md.Metadata(dataset_path)
+ #dataset_md.synchronize("ALWAYS")
+ #dataset_md.save()
+ #dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+ #target_root[:] = sorted(target_root, key=lambda x: distInfo_dict[x.tag])
+ #for child in target_root:
+ # #print(child.tag)
+ # #print(etree.tostring(child, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ # del child
+ # Esri
+ # dataIdInfo
+ # dqInfo
+ # distInfo
+ # mdContact
+ # mdLang
+ # mdChar
+ # mdDateSt
+ # mdHrLv
+ # mdHrLvName
+ # mdFileID
+ # mdMaint
+ # refSysInfo
+ # spatRepInfo
+ # spdoinfo
+ # eainfo
+
+ distorTran = target_root.xpath("//distorTran")
+ for _distorTran in distorTran:
+ _distorTran.tag = "distTranOps"
+ del _distorTran
+ del distorTran
+
+## target_root.xpath("./distInfo/distFormat/formatName")[0].set('Sync', "FALSE")
+## target_root.xpath("./dataIdInfo/envirDesc")[0].set('Sync', "TRUE")
+## #target_root.xpath("./dqInfo/dqScope/scpLvl/ScopeCd")[0].set('value', "005")
+## PresFormCd = target_root.xpath("./dataIdInfo/idCitation/presForm/PresFormCd")[0]
+## fgdcGeoform = target_root.xpath("./dataIdInfo/idCitation/presForm/fgdcGeoform")[0]
+## SpatRepTypCd = target_root.xpath("./dataIdInfo/spatRpType/SpatRepTypCd")[0]
+## PresFormCd.set('Sync', "TRUE")
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # SpatRepTypCd "Empty" "001" (vector) "002" (raster/grid) "003" (tabular)
+## # PresFormCd "005" "003" "011"
+## # fgdcGeoform "vector data" "raster data" "tabular data"
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## datasetSet = target_root.xpath("./dqInfo/dqScope/scpLvlDesc/datasetSet")[0]
+## if SpatRepTypCd.get("value") == "001":
+## PresFormCd.set("value", "005")
+## fgdcGeoform.text = "vector digital data"
+## datasetSet.text = "Vector Digital Data"
+## elif SpatRepTypCd.get("value") == "002":
+## PresFormCd.set("value", "003")
+## fgdcGeoform.text = "raster digital data"
+## datasetSet.text = "Raster Digital Data"
+## elif SpatRepTypCd.get("value") == "003":
+## PresFormCd.set("value", "011")
+## fgdcGeoform.text = "tabular digital data"
+## datasetSet.text = "Tabular Digital Data"
+## else:
+## pass
+## #print("------" * 10)
+## #print(etree.tostring(SpatRepTypCd, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## #print(etree.tostring(target_root.xpath("./dataIdInfo/idCitation/presForm")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## #print(etree.tostring(target_root.xpath("./dqInfo/dqScope")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## #print("------" * 10)
+## del datasetSet, SpatRepTypCd, fgdcGeoform, PresFormCd
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## formatName = target_root.xpath("./distInfo/distFormat/formatName")[0]
+## envirDesc = target_root.xpath("./dataIdInfo/envirDesc")[0]
+## envirDesc.set('Sync', "TRUE")
+## target_root.xpath("./distInfo/distFormat/fileDecmTech")[0].text = "Uncompressed"
+## formatName.text = "ESRI REST Service"
+## formatVer_text = str.rstrip(str.lstrip(envirDesc.text))
+## formatVer = target_root.xpath("./distInfo/distFormat/formatVer")[0]
+## formatVer.text = str.rstrip(str.lstrip(formatVer_text))
+## del formatVer_text
+## del envirDesc
+## del formatVer
+## del formatName
+
+ xml_file = b'''<distributor>
+ <distorCont>
+ <editorSource>external</editorSource>
+ <editorDigest>579ce2e21b888ac8f6ac1dac30f04cddec7a0d7c</editorDigest>
+ <rpIndName>NMFS Office of Science and Technology</rpIndName>
+ <rpOrgName>NMFS Office of Science and Technology</rpOrgName>
+ <rpPosName>GIS App Developer</rpPosName>
+ <rpCntInfo>
+ <cntAddress>
+ <delPoint>1315 East West Highway</delPoint>
+ <city>Silver Spring</city>
+ <adminArea>MD</adminArea>
+ <postCode>20910-3282</postCode>
+ <country>US</country>
+ <eMailAdd>tim.haverland@noaa.gov</eMailAdd>
+ </cntAddress>
+ <cntPhone>
+ <voiceNum>301-427-8137</voiceNum>
+ <faxNum>301-713-4137</faxNum>
+ </cntPhone>
+ <cntHours>0700 - 1800 EST/EDT</cntHours>
+ <cntOnlineRes>
+ <linkage>https://www.fisheries.noaa.gov/about/office-science-and-technology</linkage>
+ <protocol>REST Service</protocol>
+ <orName>NMFS Office of Science and Technology</orName>
+ <orDesc>NOAA Fisheries Office of Science and Technology</orDesc>
+ </cntOnlineRes>
+ </rpCntInfo>
+ <displayName>NMFS Office of Science and Technology (Distributor)</displayName>
+ <editorSave>True</editorSave>
+ </distorCont>
+ </distributor>
+ '''
+ _tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ _root = _tree.getroot()
+ distributor = target_root.xpath("./distInfo/distributor")
+ if len(distributor) == 0:
+ target_root.xpath("./distInfo")[0].insert(distInfo_dict["distributor"], _root)
+ elif len(distributor) == 1:
+ distributor[0].getparent().replace(distributor[0], _root)
+ else:
+ pass
+ del _root, _tree, xml_file
+ #print(f"\n\t{etree.tostring(target_root.xpath(f'./distInfo/distributor')[0], encoding='UTF-8', method='xml', pretty_print=True).decode()}\n")
+ del distributor
+
+## new_item_name = target_root.find("./Esri/DataProperties/itemProps/itemName").text
+## new_item_name = new_item_name.replace("IDW_Sample_Locations", "Sample_Locations") if "Sample_Locations" in new_item_name else new_item_name
+## onLineSrcs = target_root.findall("./distInfo/distTranOps/onLineSrc")
+## for onLineSrc in onLineSrcs:
+## if onLineSrc.find('./protocol').text == "ESRI REST Service":
+## old_linkage_element = onLineSrc.find('./linkage')
+## old_linkage = old_linkage_element.text
+## #print(old_linkage, flush=True)
+## old_item_name = old_linkage[old_linkage.find("/services/")+len("/services/"):old_linkage.find("/FeatureServer")]
+## new_linkage = old_linkage.replace(old_item_name, f"{new_item_name}_{date_code(project)}")
+## #print(new_linkage, flush=True)
+## old_linkage_element.text = new_linkage
+## #print(old_linkage_element.text, flush=True)
+## del old_linkage_element
+## del old_item_name, old_linkage, new_linkage
+## else:
+## pass
+## del onLineSrc
+## del onLineSrcs, new_item_name
+## #print(etree.tostring(target_root.xpath("./distInfo")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+
+ new_item_name = target_root.find("./Esri/DataProperties/itemProps/itemName").text
+ if "Sample_Locations" in new_item_name:
+ _new_item_name = new_item_name.replace("IDW_Sample_Locations", "Sample_Locations")
+ onLineSrcs = target_root.findall("./distInfo/distTranOps/onLineSrc")
+ for onLineSrc in onLineSrcs:
+ onLineSrc.find('./protocol').text = "ESRI REST Service"
+ old_linkage_element = onLineSrc.find('./linkage')
+ old_linkage = old_linkage_element.text
+ #print(old_linkage, flush=True)
+ old_item_name = old_linkage[old_linkage.find("/services/")+len("/services/"):old_linkage.find("/FeatureServer")]
+ if old_item_name != f"{_new_item_name}_{date_code(project)}":
+ #print('remove')
+ onLineSrc.getparent().remove(onLineSrc)
+ else:
+ pass
+ #print(old_item_name, f"{_new_item_name}_{date_code(project)}")
+ #new_linkage = old_linkage.replace(old_item_name, f"{_new_item_name}_{date_code(project)}")
+ #print(new_linkage, flush=True)
+ #old_linkage_element.text = new_linkage
+ #print(old_linkage_element.text, flush=True)
+ del old_linkage_element
+ del old_item_name, old_linkage #, new_linkage
+ #print(etree.tostring(onLineSrc, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #if onLineSrc.find('./protocol').text == "ESRI REST Service":
+ # old_linkage_element = onLineSrc.find('./linkage')
+ # old_linkage = old_linkage_element.text
+ # #print(old_linkage, flush=True)
+ # old_item_name = old_linkage[old_linkage.find("/services/")+len("/services/"):old_linkage.find("/FeatureServer")]
+ # new_linkage = old_linkage.replace(old_item_name, f"{new_item_name}_{date_code(project)}")
+ # #print(new_linkage, flush=True)
+ # old_linkage_element.text = new_linkage
+ # #print(old_linkage_element.text, flush=True)
+ # del old_linkage_element
+ # del old_item_name, old_linkage, new_linkage
+ #else:
+ # pass
+ del onLineSrc
+ #print(_new_item_name)
+ del onLineSrcs, _new_item_name
+ else:
+ pass
+ #print(etree.tostring(target_root.xpath("./distInfo")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(new_item_name)
+ del new_item_name
+
+ distInfo = target_root.xpath("./distInfo")
+ #print(len(distInfo))
+ for dist_Info in distInfo:
+ dist_Info[:] = sorted(dist_Info, key=lambda x: distInfo_dict[x.tag])
+ #print(etree.tostring(dist_Info, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ del dist_Info
+ del distInfo
+
+ # No changes needed below
+ #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ etree.indent(target_root, space=' ')
+ dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+
+ SaveBackXml = True
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ #dataset_md.reload()
+ del dataset_md
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ # Declared Variables
+ del export_folder, project
+ del dataset_name
+ del distInfo_dict
+ del target_tree, target_root
+ # Imports
+ del etree, StringIO, BytesIO, copy, md
+ # Function Parameters
+ del dataset_path
+ except KeyboardInterrupt:
+ raise SystemExit
+ except Exception:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+def main(project_gdb=""):
+ try:
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ print(f"{'-' * 80}")
+ print(f"Python Script: {os.path.basename(__file__)}")
+ print(rf"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ print(f"Python Version: {sys.version}")
+ print(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ print(f"{'-' * 80}\n")
+
+ #Imports
+ from lxml import etree
+ from io import StringIO, BytesIO
+ #import copy
+ #import arcpy
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # Test if passed workspace exists, if not raise SystemExit
+ if not arcpy.Exists(project_gdb):
+ print(f"{os.path.basename(project_gdb)} is missing!!")
+ print(f"{project_gdb}")
+ raise SystemExit
+
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+ #print(project_folder)
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ #metadata_dictionary = dataset_title_dict(project_gdb)
+ #for key in metadata_dictionary:
+ # print(key, metadata_dictionary[key])
+ # del key
+
+ datasets = list()
+ walk = arcpy.da.Walk(arcpy.env.workspace)
+ for dirpath, dirnames, filenames in walk:
+ for filename in filenames:
+ datasets.append(os.path.join(dirpath, filename))
+ del filename
+ del dirpath, dirnames, filenames
+ del walk
+
+ del scratch_folder, project_folder
+
+ print(f"Processing: {os.path.basename(arcpy.env.workspace)} in the '{inspect.stack()[0][3]}' function")
+
+ # Points
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("AI_Sample_Locations") or ds.endswith("AI_IDW_Sample_Locations")]):
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("_Sample_Locations")]):
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("EBS_Sample_Locations") or ds.endswith("EBS_IDW_Sample_Locations")]):
+ # Polylines
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("AI_Boundary") or ds.endswith("AI_IDW_Boundary")]):
+ # Polygons
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("AI_Region") or ds.endswith("AI_IDW_Region")]):
+ # Table
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("AI_Indicators") or ds.endswith("AI_IDW_Indicators")]):
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("Indicators")]):
+ # Raster
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("AI_Bathymetry") or ds.endswith("AI_IDW_Bathymetry")]):
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("AI_Raster_Mask") or ds.endswith("AI_IDW_Raster_Mask")]):
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("AI_Raster_Mosaic") or ds.endswith("AI_IDW_Mosaic")]):
+ #for dataset_path in sorted([ds for ds in datasets if any (ds.endswith(d) for d in ["Datasets", "AI_IDW_Extent_Points", "AI_IDW_Latitude", "AI_IDW_Raster_Mask"])]):
+ #for dataset_path in sorted([ds for ds in datasets if "AI_IDW" in os.path.basename(ds)]):
+ #for dataset_path in sorted([ds for ds in datasets if "EBS_IDW" in os.path.basename(ds)]):
+ #for dataset_path in sorted([ds for ds in datasets if any (ds.endswith(d) for d in ["Species_Filer", "EBS_IDW_Extent_Points", "EBS_IDW_Latitude", "EBS_IDW_Raster_Mask"])]):
+ #for dataset_path in sorted([ds for ds in datasets if any (ds.endswith(d) for d in ['SpeciesPersistenceIndicatorNetWTCPUE', 'SpeciesPersistenceIndicatorPercentileWTCPUE', 'Species_Filter', 'DisMAP_Survey_Info'])]):
+ for dataset_path in sorted([ds for ds in datasets if any (ds.endswith(d) for d in ['Species_Filter'])]):
+
+ # ALL
+ #for dataset_path in sorted(datasets):
+ #dataset_name = os.path.basename(dataset_path)
+ #print(f"Dataset: '{dataset_name}'\n\tType: '{arcpy.Describe(dataset_path).datasetType}'")
+ #del dataset_name
+
+ ImportBasicTemplateXml = True
+ if ImportBasicTemplateXml:
+ import_basic_template_xml(dataset_path)
+ else:
+ pass
+ del ImportBasicTemplateXml
+
+ BasicMetadataReport = False # Just a report
+ if BasicMetadataReport:
+ basic_metadata_report(dataset_path)
+ else:
+ pass
+ del BasicMetadataReport
+
+ MetadataEsriReport = False
+ if MetadataEsriReport:
+ metadata_esri_report(dataset_path)
+ else:
+ pass
+ del MetadataEsriReport
+
+ MetadataDataIdInfoReport = False
+ if MetadataDataIdInfoReport:
+ metadata_dataidinfo_report(dataset_path)
+ else:
+ pass
+ del MetadataDataIdInfoReport
+
+ MetadataDqInfoReport = False
+ if MetadataDqInfoReport:
+ metadata_dq_info_report(dataset_path)
+ else:
+ pass
+ del MetadataDqInfoReport
+
+ MetadataDistInfoReport = False
+ if MetadataDistInfoReport:
+ metadata_dist_info_report(dataset_path)
+ else:
+ pass
+ del MetadataDistInfoReport
+
+ UpdateEaInfoXmlElements = False
+ if UpdateEaInfoXmlElements:
+ update_eainfo_xml_elements(dataset_path)
+ else:
+ pass
+ del UpdateEaInfoXmlElements
+
+ AddUpdateDates = False
+ if AddUpdateDates:
+ add_update_dates(dataset_path)
+ else:
+ pass
+ del AddUpdateDates
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # A keeper. Adds entity attribute details, if missing
+ InsertMissingElements = False
+ if InsertMissingElements:
+ insert_missing_elements(dataset_path)
+ else:
+ pass
+ del InsertMissingElements
+
+ AddUpdateContacts = False
+ if AddUpdateContacts:
+ add_update_contacts(dataset_path=dataset_path)
+ else:
+ pass
+ del AddUpdateContacts
+
+ CreateFeatureClassLayers = False
+ if CreateFeatureClassLayers:
+ create_feature_class_layers(dataset_path=dataset_path)
+ else:
+ pass
+ del CreateFeatureClassLayers
+
+ PrintTargetTree = False
+ if PrintTargetTree:
+ dataset_md = md.Metadata(dataset_path)
+ #dataset_md.synchronize("ALWAYS")
+ #dataset_md.save()
+ #dataset_md.reload()
+ # Parse the XML
+ export_folder = rf"{os.path.dirname(os.path.dirname(dataset_path))}\Export"
+ #print(export_folder)
+ #_target_tree = etree.parse(StringIO(dataset_md.xml), parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ #etree.indent(_target_tree, " ")
+ #_target_tree.write(rf"{export_folder}\{os.path.basename(dataset_path)}.xml", encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+ #print(etree.tostring(_target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ #del _target_tree
+ dataset_md.saveAsXML(rf"{export_folder}\{os.path.basename(dataset_path)}.xml", "TEMPLATE")
+ del export_folder
+ del dataset_md
+ else:
+ pass
+ del PrintTargetTree
+ del dataset_path
+
+ CompactGDB = False
+ if CompactGDB:
+ print("Compact GDB")
+ arcpy.management.Compact(project_gdb)
+ print("\t"+arcpy.GetMessages().replace("\n", "\n\t")+"\n")
+ else:
+ pass
+ del CompactGDB
+
+ # Declared Variables
+ del datasets
+ # Imports
+ del etree, StringIO, BytesIO, md
+ # Function Parameters
+ del project_gdb
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ print(f"\n{'-' * 80}")
+ print(f"Python script: {os.path.basename(__file__)}\nCompleted: {strftime('%a %b %d %I:%M %p', localtime())}")
+ print(u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time))))
+ print(f"{'-' * 80}")
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+ except KeyboardInterrupt:
+ raise SystemExit
+ except Exception:
+ traceback.print_exc()
+ raise SystemExit
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+if __name__ == '__main__':
+ try:
+ # Append the location of this script to the System Path
+ sys.path.append(os.path.dirname(os.path.dirname(__file__)))
+ # Imports
+ base_project_folder = rf"{os.path.dirname(os.path.dirname(__file__))}"
+ #project_name = "April 1 2023"
+ #project_name = "July 1 2024"
+ project_name = "December 1 2024"
+ #project_name = "June 1 2025"
+ project_folder = rf"{base_project_folder}\{project_name}"
+ project_gdb = rf"{project_folder}\{project_name}.gdb"
+
+ main(project_gdb=project_gdb)
+
+ # Declared Variables
+ #del collective_title
+ del project_gdb, project_name, project_folder, base_project_folder
+ # Imports
+ except:
+ traceback.print_exc()
+ else:
+ pass
+ finally:
+ pass
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/dev_dismap_tiff_image_archive.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/dev_dismap_tiff_image_archive.py
new file mode 100644
index 0000000..1eb478c
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/dev_dismap_tiff_image_archive.py
@@ -0,0 +1,159 @@
+#---------------------------------------------------------------------------------------
+# Name: module1
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 22/12/2025
+# Copyright: (c) john.f.kennedy 2025
+# Licence:
+#---------------------------------------------------------------------------------------
+import zipfile
+import os
+import sys
+import traceback
+import inspect
+
+import arcpy
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = sys.path[0] + os.sep + "test.py"
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def zip_folder(folder_path="", archive_folder=""):
+ """
+ Creates a zip archive of a given folder and its contents.
+
+ Args:
+ folder_path (str): The path to the folder to be archived.
+ """
+ try:
+ output_zip_path = rf"{archive_folder}\{os.path.basename(folder_path)}.zip"
+ #arcpy.AddMessage(output_zip_path)
+ arcpy.AddMessage(f"\t\t\t\t../{'/'.join(output_zip_path.split(os.sep)[-3:])}")
+
+ with zipfile.ZipFile(output_zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
+ for root, dirs, files in os.walk(folder_path):
+ for file in files:
+ file_path = os.path.join(root, file)
+ # Calculate the relative path within the zip archive
+ arcname = os.path.relpath(file_path, folder_path)
+ #arcpy.AddMessage(arcname)
+ zipf.write(file_path, arcname)
+ for dir_name in dirs:
+ # Add empty directories to the archive
+ dir_path = os.path.join(root, dir_name)
+ arcname = os.path.relpath(dir_path, folder_path)
+ #arcpy.AddMessage(arcname)
+ # Ensure directory entries end with a slash in the archive
+ if not arcname.endswith('/'):
+ arcname += '/'
+ zipf.writestr(zipfile.ZipInfo(arcname), '')
+
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except:# noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def main(base_folder="", versions="", archive_folder=""):
+ try:
+ import dismap_tools
+ arcpy.AddMessage(f"Home Folder: {os.path.basename(base_folder)} in '{inspect.stack()[0][3]}'")
+ arcpy.AddMessage(f"Versions: {', '.join(versions)} in '{inspect.stack()[0][3]}'")
+
+ for version in versions:
+ image_folder = rf"{base_folder}\{version}\Images"
+ _archive_folder = os.path.join(archive_folder, f"DisMAP_{dismap_tools.date_code(version)}", "results\\raster")
+ #arcpy.AddMessage(f"\tImage Folder: {os.path.basename(image_folder)} in '{inspect.stack()[0][3]}'")
+ arcpy.AddMessage(f"\tImage Folder: {os.path.basename(image_folder)}")
+ arcpy.AddMessage(f"\t\tArchive Folder: {os.path.basename(_archive_folder)}")
+
+ for entry in os.scandir(image_folder):
+ if entry.is_dir():
+ arcpy.AddMessage(f"\t\t\tInput Folder: {os.path.basename(entry.path)}")
+ zip_folder(entry.path, _archive_folder)
+ else:
+ pass
+ del entry
+ del image_folder, version, _archive_folder
+
+ # Delete Declared Variables
+ # Delete Function Parameters
+ del base_folder, versions, archive_folder
+ # Imports
+ del dismap_tools
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except:# noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ base_folder = arcpy.GetParameterAsText(0)
+ versions = arcpy.GetParameterAsText(1)
+ archive_folder = arcpy.GetParameterAsText(2)
+
+ if not base_folder:
+ base_folder = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMap\\ArcGIS-Analysis-Python")
+ else:
+ arcpy.AddMessage(f"Home Folder: {os.path.basename(base_folder)}")
+
+ if not versions:
+ #versions = ["April 1 2023", "July 1 2024", "August 1 2025",]
+ #versions = ["April 1 2023"]
+ versions = ["February 1 2026"]
+ else:
+ # Multivalue parameters arrive as a single semicolon-delimited string
+ versions = versions.split(";")
+ arcpy.AddMessage(f"Versions: {', '.join(versions)}")
+
+ if not archive_folder:
+ archive_folder = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMap\\ArcGIS-Analysis-Python\\NCEI Archive")
+ else:
+ arcpy.AddMessage(f"Archive Folder: {os.path.basename(archive_folder)}")
+
+ result = main(base_folder=base_folder, versions=versions, archive_folder=archive_folder)
+
+ if result:
+ arcpy.SetParameterAsText(3, result)
+ del result
+
+ # Clean-up declared variables
+ del base_folder, versions, archive_folder
+
+ except: # noqa: E722
+ traceback.print_exc()
+ else:
+ pass
+ finally:
+ sys.exit()
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/dev_dismap_vector_archive.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/dev_dismap_vector_archive.py
new file mode 100644
index 0000000..9acf96b
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/dev_dismap_vector_archive.py
@@ -0,0 +1,237 @@
+#---------------------------------------------------------------------------------------
+# Name: module1
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 22/12/2025
+# Copyright: (c) john.f.kennedy 2025
+# Licence:
+#---------------------------------------------------------------------------------------
+import zipfile
+import os
+import inspect
+
+import arcpy
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = sys.path[0] + os.sep + "test.py"
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def dis_map_archive_folders(home_folder="", versions="", archive_folder=""):
+ try:
+ import dismap_tools
+
+ arcpy.env.overwriteOutput = True
+
+ #arcpy.AddMessage(f"Home Folder: {os.path.basename(home_folder)} in '{inspect.stack()[0][3]}'")
+ #arcpy.AddMessage(f"Versions: {', '.join(versions)} in '{inspect.stack()[0][3]}'")
+
+ for version in versions:
+ project_gdb = rf"{home_folder}\{version}\{version}.gdb"
+ _archive_folder = os.path.join(archive_folder, f"DisMAP_{dismap_tools.date_code(version)}")
+ #archive_gdb = rf"{_archive_folder}\DisMAP_{dismap_tools.date_code(version)}.gpkg"
+
+ archive_folders = ["initial", "results/vector-tabular/metadata", "results/raster"]
+
+ #arcpy.AddMessage(f"\tProject GDB: {os.path.basename(project_gdb)} in '{inspect.stack()[0][3]}'")
+ #arcpy.AddMessage(f"\tProject GDB: {project_gdb}")
+ #arcpy.AddMessage(f"\t\tArchive Folder: {_archive_folder}")
+ for archiveFolder in archive_folders:
+ archiveFolder_path = os.path.abspath(os.path.join(archive_folder, f"DisMAP_{dismap_tools.date_code(version)}", archiveFolder))
+ #arcpy.AddMessage(f"\t\t\tFolder: {archiveFolder_path}")
+ if not os.path.isdir(archiveFolder_path):
+ os.makedirs(archiveFolder_path)
+ else:
+ pass
+ pass
+
+ del archiveFolder_path, archiveFolder
+ del archive_folders
+
+ del project_gdb, _archive_folder
+ del version
+
+ # Delete Declared Variables
+ # Delete Function Parameters
+ del home_folder, versions, archive_folder
+ # Imports
+ del dismap_tools
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except:# noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def zip_folder(folder_path="", archive_folder=""):
+ """
+ Creates a zip archive of a given folder and its contents.
+
+ Args:
+ folder_path (str): The path to the folder to be archived.
+ """
+ output_zip_path = rf"{archive_folder}\{os.path.basename(folder_path)}.zip"
+
+ with zipfile.ZipFile(output_zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
+ for root, dirs, files in os.walk(folder_path):
+ for file in files:
+ file_path = os.path.join(root, file)
+ # Calculate the relative path within the zip archive
+ arcname = os.path.relpath(file_path, folder_path)
+ #arcpy.AddMessage(arcname)
+ zipf.write(file_path, arcname)
+ for dir_name in dirs:
+ # Add empty directories to the archive
+ dir_path = os.path.join(root, dir_name)
+ arcname = os.path.relpath(dir_path, folder_path)
+ #arcpy.AddMessage(arcname)
+ # Ensure directory entries end with a slash in the archive
+ if not arcname.endswith('/'):
+ arcname += '/'
+ zipf.writestr(zipfile.ZipInfo(arcname), '')
+
+
+def main(home_folder="", versions="", archive_folder=""):
+ try:
+ import dismap_tools
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+
+ dis_map_archive_folders(home_folder=home_folder, versions=versions, archive_folder=archive_folder)
+
+ #archive_folders = ["initial", "results/vector-tabular/metadata", "results/raster"]
+ #archiveFolder_path = os.path.abspath(os.path.join(archive_folder, f"DisMAP_{dismap_tools.date_code(version)}", archiveFolder))
+
+ arcpy.AddMessage(f"Home Folder: {os.path.basename(home_folder)} in '{inspect.stack()[0][3]}'")
+ arcpy.AddMessage(f"Versions: {', '.join(versions)} in '{inspect.stack()[0][3]}'")
+
+ for version in versions:
+ project_gdb = rf"{home_folder}\{version}\{version}.gdb"
+ _archive_folder = os.path.abspath(os.path.join(archive_folder, f"DisMAP_{dismap_tools.date_code(version)}", "results/vector-tabular"))
+ archive_gdb = os.path.abspath(rf"{_archive_folder}\DisMAP_{dismap_tools.date_code(version)}.gpkg")
+
+ #arcpy.AddMessage(f"\tProject GDB: {os.path.basename(project_gdb)} in '{inspect.stack()[0][3]}'")
+ arcpy.AddMessage(f"\tProject GDB: {os.path.basename(project_gdb)}")
+ arcpy.AddMessage(f"\t\tArchive Folder: {os.path.basename(_archive_folder)}")
+
+ arcpy.env.workspace = project_gdb
+
+ arcpy.management.CreateSQLiteDatabase(out_database_name=archive_gdb, spatial_type="GEOPACKAGE")
+
+ archive_tbs = [
+ "DisMAP_Survey_Info",
+ "Indicators",
+ "SpeciesPersistenceIndicatorPercentileBin",
+ "SpeciesPersistenceIndicatorTrend",
+ "Species_Filter",
+ ]
+ # fc_md.exportMetadata("C:\\Users\\john.f.kennedy\\Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\NCEI Archive\fc_md.xml", "ISO19139_GML32", 'REMOVE_ALL_SENSITIVE_INFO')
+ for tb in sorted([tb for tb in arcpy.ListTables("*") if tb in archive_tbs or tb.endswith("_IDW")]):
+ arcpy.AddMessage(f"\t\t\tTable: {tb}")
+ arcpy.management.Copy(rf"{project_gdb}\{tb}", rf"{archive_gdb}\{tb}")
+ tb_md = md.Metadata(rf"{project_gdb}\{tb}")
+ tb_md.exportMetadata(os.path.abspath(os.path.join(archive_folder, f"DisMAP_{dismap_tools.date_code(version)}", f"results/vector-tabular/metadata/{tb}.xml")), "ISO19139", "REMOVE_ALL_SENSITIVE_INFO")
+ del tb_md
+
+ del tb
+ del archive_tbs
+
+ archive_fcs = [
+ "Regions",
+ "Sample_Locations",
+ ]
+
+ for fc in sorted([fc for fc in arcpy.ListFeatureClasses("*") if any(fc.endswith(f"{f}") for f in archive_fcs)]):
+ arcpy.AddMessage(f"\t\t\tFeature Class: {fc}")
+ arcpy.management.Copy(rf"{project_gdb}\{fc}", rf"{archive_gdb}\{fc}")
+ fc_md = md.Metadata(rf"{project_gdb}\{fc}")
+ fc_md.exportMetadata(os.path.abspath(os.path.join(archive_folder, f"DisMAP_{dismap_tools.date_code(version)}", f"results/vector-tabular/metadata/{fc}.xml")), "ISO19139", "REMOVE_ALL_SENSITIVE_INFO")
+ del fc_md
+
+ del fc
+
+ del project_gdb, _archive_folder, archive_gdb
+ del version
+
+ # Delete Declared Variables
+ # Delete Function Parameters
+ del home_folder, versions, archive_folder
+ # Imports
+ del dismap_tools, md
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except:# noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ home_folder = arcpy.GetParameterAsText(0)
+ versions = arcpy.GetParameterAsText(1)
+ archive_folder = arcpy.GetParameterAsText(2)
+
+ if not home_folder:
+ home_folder = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMap\\ArcGIS-Analysis-Python")
+ else:
+ arcpy.AddMessage(f"Home Folder: {os.path.basename(home_folder)}")
+
+ if not versions:
+ versions = ["February 1 2026"]
+ #versions = ["April 1 2023", "July 1 2024", "August 1 2025", "February 1 2026"]
+ #versions = ["July 1 2024", "August 1 2025",]
+ else:
+ # Multivalue parameters arrive as a single semicolon-delimited string
+ versions = versions.split(";")
+ arcpy.AddMessage(f"Versions: {', '.join(versions)}")
+
+ if not archive_folder:
+ archive_folder = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMap\\ArcGIS-Analysis-Python\\NCEI Archive")
+ else:
+ arcpy.AddMessage(f"Archive Folder: {os.path.basename(archive_folder)}")
+
+ result = main(home_folder=home_folder, versions=versions, archive_folder=archive_folder)
+
+ if result:
+ arcpy.SetParameterAsText(3, result)
+ else:
+ pass
+ del result
+
+ # Clean-up declared variables
+ del home_folder, versions, archive_folder
+
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ else:
+ pass
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/dismap_base_project_setup.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/dismap_base_project_setup.py
new file mode 100644
index 0000000..d0dce99
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/dismap_base_project_setup.py
@@ -0,0 +1,115 @@
+# -*- coding: utf-8 -*-
+# -------------------------------------------------------------------------------
+# Name: dismap.py
+# Purpose: Common DisMAP functions
+#
+# Author: john.f.kennedy
+#
+# Created: 12/01/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+# -------------------------------------------------------------------------------
+import os
+import sys # built-ins first
+
+import arcpy # third-parties second # noqa: F401
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = sys.path[0] + os.sep + "test.py"
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def script_tool(base_project_folder="", base_project_folders=""):
+ try:
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ #arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ arcpy.AddMessage(base_project_folder)
+
+ arcpy.env.workspace = base_project_folder
+
+ #for folder in arcpy.ListWorkspaces("*", "Folder"):
+ # arcpy.AddMessage(os.path.basename(folder))
+ # del folder
+
+ for project_folder in base_project_folders.split(";"):
+ project_folder_path = os.path.abspath(os.path.join(base_project_folder, f"{project_folder}"))
+ if not os.path.isdir(project_folder_path):
+ os.makedirs(project_folder_path)
+ else:
+ pass
+ del project_folder_path
+ del project_folder
+
+ # Set variables
+
+ # Declared Variables
+ # Imports
+ # Function Parameters
+ del base_project_folder, base_project_folders
+
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}\nCompleted: {strftime('%a %b %d %I:%M %p', localtime())}")
+ arcpy.AddMessage(u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time))))
+ arcpy.AddMessage(f"{'-' * 80}")
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except:# noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ base_project_folder = arcpy.GetParameterAsText(0)
+ base_project_folders = arcpy.GetParameterAsText(1)
+
+ if not base_project_folder:
+ base_project_folder = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMap\ArcGIS-Analysis-Python"
+ else:
+ pass
+
+ if not base_project_folders:
+ base_project_folders = "Bathymetry;Dataset Shapefiles;Initial Data"
+ else:
+ pass
+
+ script_tool(base_project_folder, base_project_folders)
+ arcpy.SetParameterAsText(3, "Result")
+
+ del base_project_folder, base_project_folders
+
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ else:
+ pass
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/dismap_metadata_processing.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/dismap_metadata_processing.py
new file mode 100644
index 0000000..d7107b3
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/dismap_metadata_processing.py
@@ -0,0 +1,2547 @@
+"""
+Script documentation
+- Tool parameters are accessed using arcpy.GetParameter() or
+ arcpy.GetParameterAsText()
+- Update derived parameter values using arcpy.SetParameter() or
+ arcpy.SetParameterAsText()
+"""
+# built-ins first
+import os
+import sys
+import inspect
+# third-parties second
+import arcpy
+
+sys.path.append(os.path.dirname(__file__))
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = sys.path[0] + os.sep + "test.py"
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def line_info(msg):
+ f = inspect.currentframe()
+ i = inspect.getframeinfo(f.f_back)
+ return f"Script: {os.path.basename(i.filename)}\n\tNear Line: {i.lineno}\n\tFunction: {i.function}\n\tMessage: {msg}"
+
+def print_path(path):
+ # Backslashes are not allowed inside f-string expressions before Python 3.12
+ parts = path.split(os.sep)
+ return f"{os.sep.join(parts[:2])}{os.sep}...{os.sep}{os.sep.join(parts[-4:])}"
+
+def create_basic_template_xml_files(base_project_file="", project=""):
+ try:
+ # Import
+ from arcpy import metadata as md
+
+ from dismap_tools import dataset_title_dict, pretty_format_xml_file
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2)
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ base_project_folder = rf"{os.path.dirname(base_project_file)}"
+ base_project_file = rf"{base_project_folder}\DisMAP.aprx"
+ project_folder = rf"{base_project_folder}\{project}"
+ project_gdb = rf"{project_folder}\{project}.gdb"
+ metadata_folder = rf"{project_folder}\Template Metadata"
+ crfs_folder = rf"{project_folder}\CRFs"
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ metadata_dictionary = dataset_title_dict(project_gdb)
+
+ workspaces = [project_gdb, crfs_folder]
+
+ for workspace in workspaces:
+
+ arcpy.env.workspace = workspace
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ datasets = list()
+
+ walk = arcpy.da.Walk(workspace)
+
+ for dirpath, dirnames, filenames in walk:
+ for filename in filenames:
+ datasets.append(os.path.join(dirpath, filename))
+ del filename
+ del dirpath, dirnames, filenames
+ del walk
+
+ for dataset_path in sorted(datasets):
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+
+ print(f"Dataset Name: {dataset_name}")
+
+ if "Datasets" == dataset_name:
+
+ print("\tDataset Table")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ datasets_table_template = rf"{metadata_folder}\datasets_table_template.xml"
+ dataset_md.saveAsXML(datasets_table_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(datasets_table_template)
+ del datasets_table_template
+
+ del dataset_md
+
+ elif "Species_Filter" == dataset_name:
+
+ print("\tSpecies Filter Table")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ species_filter_table_template = rf"{metadata_folder}\species_filter_table_template.xml"
+ dataset_md.saveAsXML(species_filter_table_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(species_filter_table_template)
+ del species_filter_table_template
+
+ del dataset_md
+
+ elif "Indicators" in dataset_name:
+
+ print("\tIndicators")
+
+ if dataset_name == "Indicators":
+ dataset_name = f"{dataset_name}_Table"
+ else:
+ pass
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ indicators_template = rf"{metadata_folder}\indicators_template.xml"
+ dataset_md.saveAsXML(indicators_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(indicators_template)
+ del indicators_template
+
+ del dataset_md
+
+ elif "LayerSpeciesYearImageName" in dataset_name:
+
+ print("\tLayer Species Year Image Name")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ layer_species_year_image_name_template = rf"{metadata_folder}\layer_species_year_image_name_template.xml"
+ dataset_md.saveAsXML(layer_species_year_image_name_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(layer_species_year_image_name_template)
+ del layer_species_year_image_name_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Boundary"):
+
+ print("\tBoundary")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ boundary_template = rf"{metadata_folder}\boundary_template.xml"
+ dataset_md.saveAsXML(boundary_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(boundary_template)
+ del boundary_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Extent_Points"):
+
+ print("\tExtent_Points")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ extent_points_template = rf"{metadata_folder}\extent_points_template.xml"
+ dataset_md.saveAsXML(extent_points_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(extent_points_template)
+ del extent_points_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Fishnet"):
+
+ print("\tFishnet")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ fishnet_template = rf"{metadata_folder}\fishnet_template.xml"
+ dataset_md.saveAsXML(fishnet_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(fishnet_template)
+ del fishnet_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Lat_Long"):
+
+ print("\tLat_Long")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ lat_long_template = rf"{metadata_folder}\lat_long_template.xml"
+ dataset_md.saveAsXML(lat_long_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(lat_long_template)
+ del lat_long_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Region"):
+
+ print("\tRegion")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ region_template = rf"{metadata_folder}\region_template.xml"
+ dataset_md.saveAsXML(region_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(region_template)
+ del region_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Sample_Locations"):
+
+ print("\tSample_Locations")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ sample_locations_template = rf"{metadata_folder}\sample_locations_template.xml"
+ dataset_md.saveAsXML(sample_locations_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(sample_locations_template)
+ del sample_locations_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("GRID_Points"):
+
+ print("\tGRID_Points")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ grid_points_template = rf"{metadata_folder}\grid_points_template.xml"
+ dataset_md.saveAsXML(grid_points_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(grid_points_template)
+ del grid_points_template
+
+ del dataset_md
+
+ elif "DisMAP_Regions" == dataset_name:
+
+ print("\tDisMAP_Regions")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ dismap_regions_template = rf"{metadata_folder}\dismap_regions_template.xml"
+ dataset_md.saveAsXML(dismap_regions_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(dismap_regions_template)
+ del dismap_regions_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Bathymetry"):
+
+ print("\tBathymetry")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ bathymetry_template = rf"{metadata_folder}\bathymetry_template.xml"
+ dataset_md.saveAsXML(bathymetry_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(bathymetry_template)
+ del bathymetry_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Latitude"):
+
+ print("\tLatitude")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ latitude_template = rf"{metadata_folder}\latitude_template.xml"
+ dataset_md.saveAsXML(latitude_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(latitude_template)
+ del latitude_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Longitude"):
+
+ print("\tLongitude")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ longitude_template = rf"{metadata_folder}\longitude_template.xml"
+ dataset_md.saveAsXML(longitude_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(longitude_template)
+ del longitude_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Raster_Mask"):
+
+ print("\tRaster_Mask")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ raster_mask_template = rf"{metadata_folder}\raster_mask_template.xml"
+ dataset_md.saveAsXML(raster_mask_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(raster_mask_template)
+ del raster_mask_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Mosaic"):
+
+ print("\tMosaic")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ mosaic_template = rf"{metadata_folder}\mosaic_template.xml"
+ dataset_md.saveAsXML(mosaic_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(mosaic_template)
+ del mosaic_template
+
+ del dataset_md
+
+ elif dataset_name.endswith(".crf"):
+
+ print("\tCRF")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ crf_key = dataset_name.replace(".crf", "_CRF")
+ dataset_md.title = metadata_dictionary[crf_key]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[crf_key]["Tags"]
+ dataset_md.summary = metadata_dictionary[crf_key]["Summary"]
+ dataset_md.description = metadata_dictionary[crf_key]["Description"]
+ dataset_md.credits = metadata_dictionary[crf_key]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[crf_key]["Access Constraints"]
+ del crf_key
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ crf_template = rf"{metadata_folder}\crf_template.xml"
+ dataset_md.saveAsXML(crf_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(crf_template)
+ del crf_template
+
+ del dataset_md
+
+ else:
+ print("\tRegion Table")
+
+ if dataset_name.endswith("IDW"):
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ idw_region_table_template = rf"{metadata_folder}\idw_region_table_template.xml"
+ dataset_md.saveAsXML(idw_region_table_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(idw_region_table_template)
+ del idw_region_table_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("GLMME"):
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ glmme_region_table_template = rf"{metadata_folder}\glmme_region_table_template.xml"
+ dataset_md.saveAsXML(glmme_region_table_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(glmme_region_table_template)
+ del glmme_region_table_template
+
+ del dataset_md
+
+ else:
+ # No matching region-table template for this dataset name
+ pass
+
+ del dataset_name, dataset_path
+
+ del workspace
+
+ del datasets
+
+ # Declared Variables set in function
+ del project_gdb, base_project_folder, metadata_folder
+ del project_folder, scratch_folder, crfs_folder
+ del metadata_dictionary, workspaces
+
+ # Imports
+ del dataset_title_dict, pretty_format_xml_file
+ del md
+
+ # Function Parameters
+ del base_project_file, project
+
+ except arcpy.ExecuteError:
+ # Report geoprocessing-tool-specific errors
+ line, filename, err = trace()
+ arcpy.AddError(f"Geoprocessing error on {line} of {filename}:")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ # Report non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError(f"Python error on {line} of {filename}")
+ arcpy.AddError(err)
+ return False
+ else:
+ # While in development, leave this check here; for testing, move it to a finally block
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+
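Every branch in the function above differs only in its name test (an exact match, a suffix, or a `.crf` extension) and the template XML file it exports; that mapping could be expressed as data rather than as a long `elif` chain. A minimal sketch of the idea, kept free of `arcpy` so it stands alone — `SUFFIX_TEMPLATES` and `template_for_dataset` are hypothetical names, not part of this module:

```python
# Ordered (suffix, template file) pairs mirroring the elif order above.
# Order matters where suffixes could overlap, just as elif order does.
SUFFIX_TEMPLATES = [
    ("Boundary",         "boundary_template.xml"),
    ("Extent_Points",    "extent_points_template.xml"),
    ("Fishnet",          "fishnet_template.xml"),
    ("Lat_Long",         "lat_long_template.xml"),
    ("Region",           "region_template.xml"),
    ("Sample_Locations", "sample_locations_template.xml"),
    ("GRID_Points",      "grid_points_template.xml"),
    ("Bathymetry",       "bathymetry_template.xml"),
    ("Latitude",         "latitude_template.xml"),
    ("Longitude",        "longitude_template.xml"),
    ("Raster_Mask",      "raster_mask_template.xml"),
    ("Mosaic",           "mosaic_template.xml"),
    (".crf",             "crf_template.xml"),
]

def template_for_dataset(dataset_name):
    """Return the template XML file name for a dataset, or None if no match."""
    # Exact-match cases come first, as in the original chain.
    if dataset_name == "DisMAP_Regions":
        return "dismap_regions_template.xml"
    for suffix, template in SUFFIX_TEMPLATES:
        if dataset_name.endswith(suffix):
            return template
    return None
```

With a lookup like this, the per-branch body (wipe metadata, assign the dictionary fields, `saveAsXML`, pretty-format) would be written once and parameterized by the returned template name.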
+def import_basic_template_xml_files(base_project_file="", project=""):
+ try:
+ # Import
+ from arcpy import metadata as md
+
+ from dismap_tools import dataset_title_dict, pretty_format_xml_file, unique_years
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2)
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ base_project_folder = rf"{os.path.dirname(base_project_file)}"
+ project_folder = rf"{base_project_folder}\{project}"
+ project_gdb = rf"{project_folder}\{project}.gdb"
+ current_md_folder = rf"{project_folder}\Current Metadata"
+ inport_md_folder = rf"{project_folder}\InPort Metadata"
+ crfs_folder = rf"{project_folder}\CRFs"
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ #print("Creating the Metadata Dictionary. Please wait!!")
+ metadata_dictionary = dataset_title_dict(project_gdb)
+ #print("Creating the Metadata Dictionary. Completed")
+
+ workspaces = [project_gdb, crfs_folder]
+ #workspaces = [crfs_folder]
+
+ for workspace in workspaces:
+
+ arcpy.env.workspace = workspace
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ datasets = list()
+
+ walk = arcpy.da.Walk(workspace)
+
+ for dirpath, dirnames, filenames in walk:
+ for filename in filenames:
+ datasets.append(os.path.join(dirpath, filename))
+ del filename
+ del dirpath, dirnames, filenames
+ del walk
+
+ for dataset_path in sorted(datasets):
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+
+ print(f"Dataset Name: {dataset_name}")
+
+ if "Datasets" == dataset_name:
+
+ print("\tDataset Table")
+
+ datasets_table_template = rf"{current_md_folder}\Table\datasets_table_template.xml"
+ template_md = md.Metadata(datasets_table_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ #dataset_md.importMetadata(datasets_table_template)
+ dataset_md.save()
+ #dataset_md.synchronize("SELECTIVE")
+
+ del empty_md, template_md, datasets_table_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Table\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif "Species_Filter" == dataset_name:
+
+ print("\tSpecies Filter Table")
+
+ species_filter_table_template = rf"{current_md_folder}\Table\species_filter_table_template.xml"
+ template_md = md.Metadata(species_filter_table_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, species_filter_table_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Table\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif "Indicators" in dataset_name:
+
+ print("\tIndicators")
+
+ if dataset_name == "Indicators":
+ indicators_template = rf"{current_md_folder}\Table\indicators_template.xml"
+ else:
+ indicators_template = rf"{current_md_folder}\Table\region_indicators_template.xml"
+
+ template_md = md.Metadata(indicators_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, indicators_template
+
+ # Max-Min Year range table
+ years_md = unique_years(dataset_path)
+ _tags = f", {min(years_md)} to {max(years_md)}"
+ del years_md
+
+ #print(metadata_dictionary[dataset_name]["Tags"])
+ #print(_tags)
+
+ if dataset_name == "Indicators":
+ dataset_name = f"{dataset_name}_Table"
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Table\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md, _tags
+
+ elif "LayerSpeciesYearImageName" in dataset_name:
+
+ print("\tLayer Species Year Image Name")
+
+ layer_species_year_image_name_template = rf"{current_md_folder}\Table\layer_species_year_image_name_template.xml"
+ template_md = md.Metadata(layer_species_year_image_name_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, layer_species_year_image_name_template
+
+ # Max-Min Year range table
+ years_md = unique_years(dataset_path)
+ _tags = f", {min(years_md)} to {max(years_md)}"
+ del years_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Table\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md, _tags
+
+ elif dataset_name.endswith("Boundary"):
+
+ print("\tBoundary")
+
+ boundary_template = rf"{current_md_folder}\Boundary\boundary_template.xml"
+ template_md = md.Metadata(boundary_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, boundary_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Boundary\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Boundary\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Extent_Points"):
+
+ print("\tExtent_Points")
+
+ extent_points_template = rf"{current_md_folder}\Extent_Points\extent_points_template.xml"
+ template_md = md.Metadata(extent_points_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, extent_points_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Extent_Points\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Extent_Points\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Fishnet"):
+
+ print("\tFishnet")
+
+ fishnet_template = rf"{current_md_folder}\Fishnet\fishnet_template.xml"
+ template_md = md.Metadata(fishnet_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, fishnet_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Fishnet\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Fishnet\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Lat_Long"):
+
+ print("\tLat_Long")
+
+ lat_long_template = rf"{current_md_folder}\Lat_Long\lat_long_template.xml"
+ template_md = md.Metadata(lat_long_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, lat_long_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Lat_Long\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Lat_Long\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Region"):
+
+ print("\tRegion")
+
+ region_template = rf"{current_md_folder}\Region\region_template.xml"
+ template_md = md.Metadata(region_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, region_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Region\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Region\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Sample_Locations"):
+
+ print("\tSample_Locations")
+
+ sample_locations_template = rf"{current_md_folder}\Sample_Locations\sample_locations_template.xml"
+ template_md = md.Metadata(sample_locations_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, sample_locations_template
+
+ # Max-Min Year range table
+ years_md = unique_years(dataset_path)
+ _tags = f", {min(years_md)} to {max(years_md)}"
+ del years_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Sample_Locations\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Sample_Locations\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md, _tags
+
+ elif dataset_name.endswith("GRID_Points"):
+
+ print("\tGRID_Points")
+
+ grid_points_template = rf"{current_md_folder}\GRID_Points\grid_points_template.xml"
+ template_md = md.Metadata(grid_points_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, grid_points_template
+
+ # Max-Min Year range table
+ years_md = unique_years(dataset_path)
+ _tags = f", {min(years_md)} to {max(years_md)}"
+ del years_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\GRID_Points\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\GRID_Points\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md, _tags
+
+ elif "DisMAP_Regions" == dataset_name:
+
+ print("\tDisMAP_Regions")
+
+ dismap_regions_template = rf"{current_md_folder}\Region\dismap_regions_template.xml"
+ template_md = md.Metadata(dismap_regions_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, dismap_regions_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Region\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Region\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Bathymetry"):
+
+ print("\tBathymetry")
+
+ bathymetry_template = rf"{current_md_folder}\Bathymetry\bathymetry_template.xml"
+ template_md = md.Metadata(bathymetry_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, bathymetry_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Bathymetry\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Bathymetry\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Latitude"):
+
+ print("\tLatitude")
+
+ latitude_template = rf"{current_md_folder}\Latitude\latitude_template.xml"
+ template_md = md.Metadata(latitude_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, latitude_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Latitude\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Latitude\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Longitude"):
+
+ print("\tLongitude")
+
+ longitude_template = rf"{current_md_folder}\Longitude\longitude_template.xml"
+ template_md = md.Metadata(longitude_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, longitude_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Longitude\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Longitude\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Raster_Mask"):
+
+ print("\tRaster_Mask")
+
+ raster_mask_template = rf"{current_md_folder}\Raster_Mask\raster_mask_template.xml"
+ template_md = md.Metadata(raster_mask_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, raster_mask_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Raster_Mask\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Raster_Mask\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Mosaic"):
+
+ print("\tMosaic")
+
+ mosaic_template = rf"{current_md_folder}\Mosaic\mosaic_template.xml"
+ template_md = md.Metadata(mosaic_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, mosaic_template
+
+ # Year range (min to max), appended to the tags
+ years_md = unique_years(dataset_path)
+ _tags = f", {min(years_md)} to {max(years_md)}"
+ del years_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Mosaic\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Mosaic\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md, _tags
+
+ elif dataset_name.endswith(".crf"):
+
+ print("\tCRF")
+ #print(dataset_name)
+ #print(dataset_path)
+ #dataset_path = dataset_path.replace(crfs_folder, project_gdb).replace(".crf", "_Mosaic")
+ #print(dataset_path)
+
+ crf_template = rf"{current_md_folder}\CRF\crf_template.xml"
+ template_md = md.Metadata(crf_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, crf_template
+
+ # Year range (min to max) from the corresponding mosaic, appended to the tags
+ years_md = unique_years(dataset_path.replace(crfs_folder, project_gdb).replace(".crf", "_Mosaic"))
+ _tags = f", {min(years_md)} to {max(years_md)}"
+ del years_md
+
+ crf_key = dataset_name.replace(".crf", "_CRF")
+ dataset_md.title = metadata_dictionary[crf_key]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[crf_key]["Tags"] + _tags
+ dataset_md.summary = metadata_dictionary[crf_key]["Summary"]
+ dataset_md.description = metadata_dictionary[crf_key]["Description"]
+ dataset_md.credits = metadata_dictionary[crf_key]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[crf_key]["Access Constraints"]
+ del crf_key
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\CRF\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\CRF\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md, _tags
+
+ else:
+ print("\tRegion Table")
+
+ if dataset_name.endswith("IDW"):
+
+ idw_region_table_template = rf"{current_md_folder}\Table\idw_region_table_template.xml"
+ template_md = md.Metadata(idw_region_table_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, idw_region_table_template
+
+ # Year range (min to max), appended to the tags
+ years_md = unique_years(dataset_path)
+ _tags = f", {min(years_md)} to {max(years_md)}"
+ del years_md
+
+ dataset_md.title = metadata_dictionary[f"{dataset_name}"]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[f"{dataset_name}"]["Tags"] + _tags
+ dataset_md.summary = metadata_dictionary[f"{dataset_name}"]["Summary"]
+ dataset_md.description = metadata_dictionary[f"{dataset_name}"]["Description"]
+ dataset_md.credits = metadata_dictionary[f"{dataset_name}"]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[f"{dataset_name}"]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Table\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md, _tags
+
+ elif dataset_name.endswith("GLMME"):
+
+ glmme_region_table_template = rf"{current_md_folder}\Table\glmme_region_table_template.xml"
+ template_md = md.Metadata(glmme_region_table_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, glmme_region_table_template
+
+ # Year range (min to max), appended to the tags
+ years_md = unique_years(dataset_path)
+ _tags = f", {min(years_md)} to {max(years_md)}"
+ del years_md
+
+ dataset_md.title = metadata_dictionary[f"{dataset_name}"]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[f"{dataset_name}"]["Tags"] + _tags
+ dataset_md.summary = metadata_dictionary[f"{dataset_name}"]["Summary"]
+ dataset_md.description = metadata_dictionary[f"{dataset_name}"]["Description"]
+ dataset_md.credits = metadata_dictionary[f"{dataset_name}"]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[f"{dataset_name}"]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Table\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md, _tags
+
+ else:
+ pass
+
+ del dataset_name, dataset_path
+
+ del workspace
+
+ del datasets
+
+ # Declared Variables set in function
+ del project_gdb, base_project_folder, current_md_folder, inport_md_folder
+ del project_folder, scratch_folder, crfs_folder
+ del metadata_dictionary, workspaces
+
+ # Imports
+ del dataset_title_dict, pretty_format_xml_file, unique_years
+ del md
+
+ # Function Parameters
+ del base_project_file, project
+
+ except arcpy.ExecuteError:
+ # Return geoprocessing tool-specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ # Get non-tool (Python) errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
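The Mosaic, CRF, and region-table branches above all derive a year-range tag suffix via `_tags = f", {min(years_md)} to {max(years_md)}"`. That construction can be exercised without arcpy; `year_range_tag` below is a hypothetical helper name, shown only as a sketch of the tag logic:

```python
def year_range_tag(years):
    """Build the ", <min> to <max>" suffix appended to dataset tags."""
    years = sorted(set(years))
    if not years:
        return ""  # no survey years: leave the tags unchanged
    return f", {years[0]} to {years[-1]}"
```

With `unique_years()` returning, say, `[1982, 1983, 2019]`, the suffix becomes `", 1982 to 2019"`.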
+
+def create_thumbnails(base_project_file="", project=""):
+ try:
+ # Import
+ from arcpy import metadata as md
+
+ from dismap_tools import pretty_format_xml_file
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2)
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ base_project_folder = rf"{os.path.dirname(base_project_file)}"
+ base_project_file = rf"{base_project_folder}\DisMAP.aprx"
+ project_folder = rf"{base_project_folder}\{project}"
+ project_gdb = rf"{project_folder}\{project}.gdb"
+ metadata_folder = rf"{project_folder}\Export Metadata"
+ crfs_folder = rf"{project_folder}\CRFs"
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ aprx = arcpy.mp.ArcGISProject(base_project_file)
+ home_folder = aprx.homeFolder
+
+ workspaces = [project_gdb, crfs_folder]
+
+ for workspace in workspaces:
+
+ arcpy.env.workspace = workspace
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ datasets = list()
+
+ walk = arcpy.da.Walk(workspace)
+
+ for dirpath, dirnames, filenames in walk:
+ for filename in filenames:
+ datasets.append(os.path.join(dirpath, filename))
+ del filename
+ del dirpath, dirnames, filenames
+ del walk
+
+ for dataset_path in sorted(datasets):
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+
+ print(f"Dataset Name: {dataset_name}")
+
+ if "Datasets" == dataset_name:
+
+ print("\tDataset Table")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif "Species_Filter" == dataset_name:
+
+ print("\tSpecies Filter Table")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif "Indicators" in dataset_name:
+
+ print("\tIndicators")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif "LayerSpeciesYearImageName" in dataset_name:
+
+ print("\tLayer Species Year Image Name")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif dataset_name.endswith("Boundary"):
+
+ print("\tBoundary")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Boundary\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif dataset_name.endswith("Extent_Points"):
+
+ print("\tExtent_Points")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Extent_Points\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif dataset_name.endswith("Fishnet"):
+
+ print("\tFishnet")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Fishnet\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif dataset_name.endswith("Lat_Long"):
+
+ print("\tLat_Long")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Lat_Long\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif dataset_name.endswith("Region"):
+
+ print("\tRegion")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Region\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif dataset_name.endswith("Sample_Locations"):
+
+ print("\tSample_Locations")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Sample_Locations\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif dataset_name.endswith("GRID_Points"):
+
+ print("\tGRID_Points")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\GRID_Points\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif "DisMAP_Regions" == dataset_name:
+
+ print("\tDisMAP_Regions")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Region\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif dataset_name.endswith("Bathymetry"):
+
+ print("\tBathymetry")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Bathymetry\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif dataset_name.endswith("Latitude"):
+
+ print("\tLatitude")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Latitude\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif dataset_name.endswith("Longitude"):
+
+ print("\tLongitude")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Longitude\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif dataset_name.endswith("Raster_Mask"):
+
+ print("\tRaster_Mask")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Raster_Mask\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif dataset_name.endswith("Mosaic"):
+
+ print("\tMosaic")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Mosaic\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif dataset_name.endswith(".crf"):
+
+ print("\tCRF")
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\CRF\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ else:
+ print("\tRegion Table")
+
+ if dataset_name.endswith("IDW"):
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ elif dataset_name.endswith("GLMME"):
+
+ dataset_md = md.Metadata(dataset_path)
+
+ out_xml = rf"{metadata_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md
+
+ else:
+ pass
+
+ del dataset_name, dataset_path
+
+ del workspace, datasets
+
+ del workspaces
+
+ # Declared Variables set in function for aprx
+ del home_folder
+ # Save aprx one more time and then delete
+ aprx.save()
+ del aprx
+
+ # Declared Variables set in function
+ del project_gdb, base_project_folder, metadata_folder, crfs_folder
+ del project_folder, scratch_folder
+
+ # Imports
+ del pretty_format_xml_file
+ del md
+
+ # Function Parameters
+ del base_project_file, project
+
+ except arcpy.ExecuteError:
+ # Return geoprocessing tool-specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ # Get non-tool (Python) errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
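`pretty_format_xml_file` is imported from `dismap_tools`, which is not shown in this diff. A minimal standard-library stand-in (an assumption about what the real helper does, namely re-indent an XML file in place) might look like:

```python
import xml.dom.minidom

def pretty_format_xml_file(path):
    """Re-indent an XML file in place, dropping the blank lines
    that toprettyxml() inserts between elements."""
    dom = xml.dom.minidom.parse(path)
    lines = [ln for ln in dom.toprettyxml(indent="  ").splitlines() if ln.strip()]
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")
```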
+
+def export_to_inport_xml_files(base_project_file="", project=""):
+ try:
+ if not base_project_file or not project: raise SystemExit("base_project_file and project parameters are required")
+
+ # Import
+ from arcpy import metadata as md
+
+ from dismap_tools import pretty_format_xml_file
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2)
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ base_project_folder = rf"{os.path.dirname(base_project_file)}"
+ project_folder = rf"{base_project_folder}\{project}"
+ project_gdb = rf"{project_folder}\{project}.gdb"
+ metadata_folder = rf"{project_folder}\InPort Metadata"
+ crfs_folder = rf"{project_folder}\CRFs"
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ datasets = [rf"{project_gdb}\Species_Filter", rf"{project_gdb}\Indicators", os.path.join(project_gdb, "DisMAP_Regions"), rf"{project_gdb}\GMEX_IDW_Sample_Locations", rf"{project_gdb}\GMEX_IDW_Mosaic", rf"{crfs_folder}\GMEX_IDW.crf"]
+
+ for dataset_path in sorted(datasets):
+ print(dataset_path)
+
+ dataset_name = os.path.basename(dataset_path)
+
+ print(f"Dataset Name: {dataset_name}")
+
+ target_file_path = rf"{metadata_folder}\{dataset_name}.xml"
+ custom_xslt_path = rf"{metadata_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ del dataset_md
+
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_name, dataset_path
+
+ del datasets
+
+ # Declared Variables set in function
+ del project_gdb, base_project_folder, metadata_folder
+ del project_folder, scratch_folder, crfs_folder
+
+ # Imports
+ del pretty_format_xml_file
+ del md
+
+ # Function Parameters
+ del base_project_file, project
+
+ except arcpy.ExecuteError:
+ # Return geoprocessing tool-specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ # Get non-tool (Python) errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
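`create_maps` below flips the 8.5 x 11 page between portrait and landscape based on the dataset extent. That decision can be isolated as a pure function; `choose_page_size` is a hypothetical name used only for this sketch:

```python
def choose_page_size(extent_width, extent_height, map_width=8.5, map_height=11.0):
    """Return (page_width, page_height) matching the extent's orientation."""
    if extent_height > extent_width:
        return map_width, map_height   # taller than wide: portrait
    if extent_width > extent_height:
        return map_height, map_width   # wider than tall: landscape
    return map_width, map_height       # square extent: default to portrait
```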
+
+def create_maps(base_project_file="", project="", dataset=""):
+ try:
+ # Import
+ from arcpy import metadata as md
+
+ from dismap_tools import pretty_format_xml_file
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2)
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ base_project_folder = rf"{os.path.dirname(base_project_file)}"
+ base_project_file = rf"{base_project_folder}\DisMAP.aprx"
+ project_folder = rf"{base_project_folder}\{project}"
+ project_gdb = rf"{project_folder}\{project}.gdb"
+ metadata_folder = rf"{project_folder}\Export Metadata"
+ crfs_folder = rf"{project_folder}\CRFs"
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ aprx = arcpy.mp.ArcGISProject(base_project_file)
+
+ dataset_name = os.path.basename(dataset)
+
+ print(f"Dataset Name: {dataset_name}")
+
+ if dataset_name not in [cm.name for cm in aprx.listMaps()]:
+ print(f"Creating Map: {dataset_name}")
+ aprx.createMap(f"{dataset_name}", "Map")
+ aprx.save()
+ else:
+ pass
+
+ current_map = aprx.listMaps(f"{dataset_name}")[0]
+ print(f"Current Map: {current_map.name}")
+
+ if dataset_name not in [lyr.name for lyr in current_map.listLayers(f"{dataset_name}")]:
+ print(f"Adding {dataset_name} to Map")
+
+ map_layer = arcpy.management.MakeFeatureLayer(dataset, f"{dataset_name}")
+
+ #arcpy.management.Delete(rf"{project_folder}\Layers\{dataset_name}.lyrx")
+ #os.remove(rf"{project_folder}\Layers\{dataset_name}.lyrx")
+
+ map_layer_file = arcpy.management.SaveToLayerFile(map_layer, rf"{project_folder}\Layers\{dataset_name}.lyrx")
+ del map_layer_file
+
+ map_layer_file = arcpy.mp.LayerFile(rf"{project_folder}\Layers\{dataset_name}.lyrx")
+
+ arcpy.management.Delete(map_layer)
+ del map_layer
+
+ current_map.addLayer(map_layer_file)
+ del map_layer_file
+
+ aprx.save()
+ else:
+ pass
+
+ #aprx_basemaps = aprx.listBasemaps()
+ #basemap = 'GEBCO Basemap/Contours (NOAA NCEI Visualization)'
+ basemap = "Terrain with Labels"
+
+ current_map.addBasemap(basemap)
+ del basemap
+
+ # Set Reference Scale
+ current_map.referenceScale = 50000000
+
+ # Clear Selection
+ current_map.clearSelection()
+
+ current_map_cim = current_map.getDefinition('V3')
+ current_map_cim.enableWraparound = True
+ current_map.setDefinition(current_map_cim)
+
+ # Fetch the layer from the map, then return its CIM definition
+ lyr = current_map.listLayers(f"{dataset_name}")[0]
+ cim_lyr = lyr.getDefinition('V3')
+
+ # Modify the color, width and dash template for the SolidStroke layer
+ symLvl1 = cim_lyr.renderer.symbol.symbol.symbolLayers[0]
+ symLvl1.color.values = [0, 0, 0, 100]
+ symLvl1.width = 1
+
+ # Push the changes back to the layer object
+ lyr.setDefinition(cim_lyr)
+ del symLvl1, cim_lyr
+
+ aprx.save()
+
+ extent = arcpy.Describe(dataset).extent
+ height = extent.YMax - extent.YMin
+ width = extent.XMax - extent.XMin
+ del extent
+
+ # map_width, map_height
+ map_width, map_height = 8.5, 11
+
+ if height > width:
+ page_height, page_width = map_height, map_width
+ elif height < width:
+ page_height, page_width = map_width, map_height
+ else:
+ page_width, page_height = map_width, map_height
+
+ del map_width, map_height
+ del height, width
+
+ if dataset_name not in [cl.name for cl in aprx.listLayouts()]:
+ print(f"Creating Layout: {dataset_name}")
+ aprx.createLayout(page_width, page_height, "INCH", f"{dataset_name}")
+ aprx.save()
+ else:
+ print(f"Layout: {dataset_name} exists")
+
+ # The default camera only affects newly opened views, so grab the layer,
+ # select features to frame, then pan/zoom before exporting
+ lyr = current_map.listLayers(f"{dataset_name}")[-1]
+
+ # Select sample regions to frame the preview extent
+ arcpy.management.SelectLayerByAttribute(lyr, 'NEW_SELECTION', "DatasetCode in ('ENBS', 'HI', 'NEUS_SPR')")
+
+ mv = current_map.openView()
+ mv.panToExtent(mv.getLayerExtent(lyr, True, True))
+ mv.zoomToAllLayers()
+ del mv
+
+ arcpy.management.SelectLayerByAttribute(lyr, 'CLEAR_SELECTION')
+
+ av = aprx.activeView
+ av.exportToPNG(rf"{project_folder}\Layers\{dataset_name}.png", width=288, height=192, resolution = 96, color_mode="24-BIT_TRUE_COLOR", embed_color_profile=True)
+ av.exportToJPEG(rf"{project_folder}\Layers\{dataset_name}.jpg", width=288, height=192, resolution = 96, jpeg_color_mode="24-BIT_TRUE_COLOR", embed_color_profile=True)
+ del av
+
+ #print(current_map.referenceScale)
+
+ #export the newly opened active view to PDF, then delete the new map
+ #mv = aprx.activeView
+ #mv.exportToPDF(r"C:\Temp\RangerStations.pdf", width=700, height=500, resolution=96)
+ #aprx.deleteItem(current_map)
+
+ #mv = aprx.activeView
+ #mv = current_map.defaultView
+ #mv.zoomToAllLayers()
+ #print(mv.camera.getExtent())
+ #arcpy.management.Delete(rf"{project_folder}\Layers\{dataset_name}.png")
+ #arcpy.management.Delete(rf"{project_folder}\Layers\{dataset_name}.jpg")
+
+ #os.remove(rf"{project_folder}\Layers\{dataset_name}.png")
+ #os.remove(rf"{project_folder}\Layers\{dataset_name}.jpg")
+
+
+ #mv.exportToPNG(rf"{project_folder}\Layers\{dataset_name}.png", width=288, height=192, resolution = 96, color_mode="24-BIT_TRUE_COLOR", embed_color_profile=True)
+ #mv.exportToJPEG(rf"{project_folder}\Layers\{dataset_name}.jpg", width=288, height=192, resolution = 96, jpeg_color_mode="24-BIT_TRUE_COLOR", embed_color_profile=True)
+ #del mv
+
+ #Export the resulting imported layout and changes to JPEG
+ #print(f"Exporting '{current_layout.name}'")
+ #current_map.exportToJPEG(rf"{project_folder}\Layouts\{current_layout.name}.jpg", page_width, page_height)
+ #current_map.exportToPNG(rf"{project_folder}\Layouts\{current_layout.name}.png", page_width, page_height)
+
+ #fc_md = md.Metadata(dataset)
+ #fc_md.thumbnailUri = rf"{project_folder}\Layouts\{dataset_name}.png"
+ #fc_md.thumbnailUri = rf"{project_folder}\Layouts\{dataset_name}.jpg"
+ #fc_md.save()
+ #del fc_md
+
+ aprx.save()
+
+
+ # # from arcpy import metadata as md
+ # #
+ # # fc_md = md.Metadata(dataset)
+ # # fc_md.thumbnailUri = rf"{project_folder}\Layers\{dataset_name}.png"
+ # # fc_md.save()
+ # # del fc_md
+ # # del md
+
+## aprx.save()
+##
+## current_layout = [cl for cl in aprx.listLayouts() if cl.name == dataset_name][0]
+## print(f"Current Layout: {current_layout.name}")
+##
+## current_layout.openView()
+##
+## # Remove all map frames
+## for mf in current_layout.listElements("MapFrame_Element"): current_layout.deleteElement(mf); del mf
+##
+## # print(f'Layout Name: {current_layout.name}')
+## # print(f' Width x height: {current_layout.pageWidth} x {current_layout.pageHeight} units are {current_layout.pageUnits}')
+## # print(f' MapFrame count: {str(len(current_layout.listElements("MapFrame_Element")))}')
+## # for mf in current_layout.listElements("MapFrame_Element"):
+## # if len(current_layout.listElements("MapFrame_Element")) > 0:
+## # print(f' MapFrame name: {mf.name}')
+## # print(f' Total element count: {str(len(current_layout.listElements()))} \n')
+##
+##
+## print(f"Create a new map frame using a point geometry")
+## #Create a new map frame using a point geometry
+## #mf1 = current_layout.createMapFrame(arcpy.Point(0.01,0.01), current_map, 'New MF - Point')
+## mf1 = current_layout.createMapFrame(arcpy.Point(0.0,0.0), current_map, 'New MF - Point')
+## #mf1.elementWidth = 10
+## #mf1.elementHeight = 7.5
+## #mf1.elementWidth = page_width - 0.01
+## #mf1.elementHeight = page_height - 0.01
+## mf1.elementWidth = page_width
+## mf1.elementHeight = page_height
+
+## lyr = current_map.listLayers(f"{dataset_name}")[0]
+##
+## #Zoom to ALL selected features and export to PDF
+## #arcpy.SelectLayerByAttribute_management(lyr, 'NEW_SELECTION')
+## #mf1.zoomToAllLayers(True)
+## #arcpy.SelectLayerByAttribute_management(lyr, 'CLEAR_SELECTION')
+##
+## #Set the map frame extent to the extent of a layer
+## #mf1.camera.setExtent(mf1.getLayerExtent(lyr, False, True))
+## #mf1.camera.scale = mf1.camera.scale * 1.1 #add a slight buffer
+##
+## del lyr
+
+## print(f"Create a new bookmark set to the map frame's default extent")
+## #Create a new bookmark set to the map frame's default extent
+## bkmk = mf1.createBookmark('Default Extent', "The map's default extent")
+## bkmk.updateThumbnail()
+## del mf1
+## del bkmk
+
+ # Create point text element using a system style item
+ # txtStyleItem = aprx.listStyleItems('ArcGIS 2D', 'TEXT', 'Title (Serif)')[0]
+ # ptTxt = aprx.createTextElement(current_layout, arcpy.Point(5.5, 4.25), 'POINT', f'{dataset_name}', 10, style_item=txtStyleItem)
+ # del txtStyleItem
+
+ # Change the anchor position and reposition the text to center
+ # ptTxt.setAnchor('Center_Point')
+ # ptTxt.elementPositionX = page_width / 2.0
+ # ptTxt.elementPositionY = page_height - 0.25
+ # del ptTxt
+
+ # print(f"Using CIM to update border")
+ # current_layout_cim = current_layout.getDefinition('V3')
+ # for elm in current_layout_cim.elements:
+ # if type(elm).__name__ == 'CIMMapFrame':
+ # if elm.graphicFrame.borderSymbol.symbol.symbolLayers:
+ # sym = elm.graphicFrame.borderSymbol.symbol.symbolLayers[0]
+ # sym.width = 5
+ # sym.color.values = [255, 0, 0, 100]
+ # else:
+ # arcpy.AddWarning(elm.name + ' has NO symbol layers')
+ # current_layout.setDefinition(current_layout_cim)
+ # del current_layout_cim, elm, sym
+
+## ExportLayout = True
+## if ExportLayout:
+## #Export the resulting imported layout and changes to JPEG
+## print(f"Exporting '{current_layout.name}'")
+## current_layout.exportToJPEG(rf"{project_folder}\Layouts\{current_layout.name}.jpg")
+## current_layout.exportToPNG(rf"{project_folder}\Layouts\{current_layout.name}.png")
+## del ExportLayout
+
+## #Export the resulting imported layout and changes to JPEG
+## print(f"Exporting '{current_layout.name}'")
+## current_map.exportToJPEG(rf"{project_folder}\Layouts\{current_layout.name}.jpg", page_width, page_height)
+## current_map.exportToPNG(rf"{project_folder}\Layouts\{current_layout.name}.png", page_width, page_height)
+##
+## fc_md = md.Metadata(dataset)
+## fc_md.thumbnailUri = rf"{project_folder}\Layouts\{current_layout.name}.png"
+## #fc_md.thumbnailUri = rf"{project_folder}\Layouts\{current_layout.name}.jpg"
+## fc_md.save()
+## del fc_md
+##
+## aprx.save()
+
+ # aprx.deleteItem(current_map)
+ #aprx.deleteItem(current_layout)
+
+ del current_map
+ #, current_layout
+ #del page_width, page_height
+ del dataset_name, dataset
+
+ aprx.save()
+
+ print("\nCurrent Maps & Layouts")
+
+ current_maps = aprx.listMaps()
+ #current_layouts = aprx.listLayouts()
+
+ if current_maps:
+ print("\nCurrent Maps\n")
+ for current_map in current_maps:
+ print(f"\tProject Map: {current_map.name}")
+ del current_map
+ else:
+ arcpy.AddWarning("No maps in Project")
+
+## if current_layouts:
+## print(f"\nCurrent Layouts\n")
+## for current_layout in current_layouts:
+## print(f"\tProject Layout: {current_layout.name}")
+## del current_layout
+## else:
+## arcpy.AddWarning("No layouts in Project")
+
+ #del current_layouts
+ del current_maps
+
+ # Declared Variables set in function for aprx
+
+ # Save aprx one more time and then delete
+ aprx.save()
+ del aprx
+
+ # Declared Variables set in function
+ del project_gdb, base_project_folder, metadata_folder, crfs_folder
+ del project_folder, scratch_folder
+
+ # Imports
+ del pretty_format_xml_file
+ del md
+
+ # Function Parameters
+ del base_project_file, project
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+
+def script_tool(project_folder=""):
+ """Script code goes below"""
+ try:
+ from lxml import etree
+ from arcpy import metadata as md
+ from io import StringIO
+ import dismap_tools
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+ # Imports
+ #from dev_import_datasets_species_filter_csv_data import worker
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+## Backup = False
+## if Backup:
+## dismap.backup_gdb(project_gdb)
+## del Backup
+##
+## # Imports
+## del dismap
+## # Function parameters
+##
+## results = []
+##
+## try:
+##
+## CreateBasicTemplateXMLFiles = False
+## if CreateBasicTemplateXMLFiles:
+## result = create_basic_template_xml_files(base_project_file, project)
+## results.extend(result); del result
+## del CreateBasicTemplateXMLFiles
+##
+## ImportBasicTemplateXmlFiles = False
+## if ImportBasicTemplateXmlFiles:
+## result = import_basic_template_xml_files(base_project_file, project)
+## results.extend(result); del result
+## del ImportBasicTemplateXmlFiles
+##
+## CreateThumbnails = False
+## if CreateThumbnails:
+## result = create_thumbnails(base_project_file, project)
+## results.extend(result); del result
+## del CreateThumbnails
+##
+## CreateMaps = False
+## if CreateMaps:
+## result = create_maps(base_project_file, project, dataset=os.path.join(project_gdb, "DisMAP_Regions"))
+## results.extend(result); del result
+## del CreateMaps
+##
+## ExportToInportXmlFiles = False
+## if ExportToInportXmlFiles:
+## result = export_to_inport_xml_files(base_project_file, project)
+## results.extend(result); del result
+## del ExportToInportXmlFiles
+##
+## except Exception as e:
+## arcpy.AddError(str(e))
+##
+## # Variable created in function
+## del project_gdb
+## # Function parameters
+## del base_project_file, project
+
+
+ # Imports
+ del etree, md, StringIO, dismap_tools
+ # Function Parameters
+ del project_folder
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}\nCompleted: {strftime('%a %b %d %I:%M %p', localtime())}")
+ arcpy.AddMessage(f"Elapsed Time {strftime('%H:%M:%S', gmtime(elapse_time))} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+
+if __name__ == '__main__':
+ try:
+
+ project_folder = arcpy.GetParameterAsText(0)
+ if not project_folder:
+ project_folder = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026")
+ else:
+ pass
+
+ script_tool(project_folder)
+
+ arcpy.SetParameterAsText(1, "Result")
+
+ del project_folder
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/dismap_project_setup.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/dismap_project_setup.py
new file mode 100644
index 0000000..787bf7d
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/dismap_project_setup.py
@@ -0,0 +1,146 @@
+"""
+Script documentation
+- Tool parameters are accessed using arcpy.GetParameter() or
+ arcpy.GetParameterAsText()
+- Update derived parameter values using arcpy.SetParameter() or
+ arcpy.SetParameterAsText()
+"""
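+# A minimal sketch of the parameter pattern described above; the index layout
+# (0 = input, 1 = derived output) is an assumption, not this tool's signature:
+#
+#   value = arcpy.GetParameterAsText(0)   # read an input parameter as text
+#   arcpy.SetParameterAsText(1, value)    # push a derived value back to the tool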
+import os
+import arcpy
+import traceback
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = __file__ # report this script's path, not the "test.py" placeholder
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def script_tool(new_project_folder, project_folders):
+ """Script code goes below"""
+ try:
+ arcpy.env.overwriteOutput = True
+ aprx = arcpy.mp.ArcGISProject("CURRENT")
+ aprx.save()
+ home_folder = aprx.homeFolder
+ if not arcpy.Exists(rf"{home_folder}\{new_project_folder}"):
+ arcpy.AddMessage(f"Creating Project Folder: '{new_project_folder}'")
+ arcpy.management.CreateFolder(home_folder, new_project_folder)
+ arcpy.AddMessage(arcpy.GetMessages())
+ else:
+ arcpy.AddMessage(f"Project Folder: '{new_project_folder}' Exists")
+ if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\{new_project_folder}.gdb"):
+ arcpy.AddMessage(f"Creating Project GDB: '{new_project_folder}.gdb'")
+ arcpy.management.CreateFileGDB(rf"{home_folder}\{new_project_folder}", f"{new_project_folder}")
+ arcpy.AddMessage(arcpy.GetMessages())
+ else:
+ arcpy.AddMessage(f"Project GDB: {new_project_folder}.gdb exists")
+
+ if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\Scratch"):
+ arcpy.AddMessage("Creating the Scratch Folder")
+ arcpy.management.CreateFolder(rf"{home_folder}\{new_project_folder}", "Scratch")
+ arcpy.AddMessage(arcpy.GetMessages())
+ else:
+ arcpy.AddMessage(f"Scratch Folder in '{new_project_folder}' exists")
+
+ if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\Scratch\scratch.gdb"):
+ arcpy.AddMessage("Creating the Scratch GDB")
+ arcpy.management.CreateFileGDB(rf"{home_folder}\{new_project_folder}\Scratch", "scratch")
+ arcpy.AddMessage(arcpy.GetMessages())
+ else:
+ arcpy.AddMessage("Scratch GDB Exists")
+
+ for _project_folder in project_folders.split(";"):
+ if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\{_project_folder}"):
+ arcpy.AddMessage(f"Creating Folder: {_project_folder}")
+ arcpy.management.CreateFolder(rf"{home_folder}\{new_project_folder}", _project_folder)
+ arcpy.AddMessage(arcpy.GetMessages())
+ else:
+ arcpy.AddMessage(f"Folder: '{_project_folder}' Exists")
+ del _project_folder
+ if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\{new_project_folder}.aprx"):
+ aprx.saveACopy(rf"{home_folder}\{new_project_folder}\{new_project_folder}.aprx")
+ arcpy.AddMessage(arcpy.GetMessages())
+ else:
+ pass
+
+ _aprx = arcpy.mp.ArcGISProject(rf"{home_folder}\{new_project_folder}\{new_project_folder}.aprx")
+ # Remove maps
+ _maps = _aprx.listMaps()
+ if len(_maps) > 0:
+ for _map in _maps:
+ arcpy.AddMessage(_map.name)
+ _aprx.deleteItem(_map)
+ del _map
+ del _maps
+ _aprx.save()
+
+ databases = []
+ databases.append({"databasePath": rf"{home_folder}\{new_project_folder}\{new_project_folder}.gdb", "isDefaultDatabase": True})
+ _aprx.updateDatabases(databases)
+ arcpy.AddMessage(f"Databases: {databases}")
+ del databases
+ _aprx.save()
+
+ toolboxes = []
+ toolboxes.append({"toolboxPath": rf"{home_folder}\DisMAP.atbx", "isDefaultToolbox": True})
+ _aprx.updateToolboxes(toolboxes)
+ arcpy.AddMessage(f"Toolboxes: {toolboxes}")
+ del toolboxes
+ _aprx.save()
+ del _aprx
+
+ # Declared variables
+ del home_folder, aprx
+ # Function parameters
+ del new_project_folder, project_folders
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ #raise SystemExit
+ except SystemExit:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ #raise SystemExit
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ #raise SystemExit
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ #raise SystemExit
+ else:
+ pass
+ return True
+ finally:
+ pass
+if __name__ == "__main__":
+ try:
+ new_project_folder = arcpy.GetParameterAsText(0)
+ project_folders = arcpy.GetParameterAsText(1)
+
+ if not new_project_folder:
+ new_project_folder = "February 1 2026"
+ else:
+ pass
+
+ if not project_folders:
+ project_folders = "CRFs;CSV_Data;Dataset_Shapefiles;Images;Layers;Metadata_Export;Publish"
+ else:
+ pass
+
+ script_tool(new_project_folder, project_folders)
+ arcpy.SetParameterAsText(2, "Result")
+
+ del new_project_folder, project_folders
+
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/dismap_tools.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/dismap_tools.py
new file mode 100644
index 0000000..bbc4f7e
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/dismap_tools.py
@@ -0,0 +1,3360 @@
+# -*- coding: utf-8 -*-
+# -------------------------------------------------------------------------------
+# Name: py
+# Purpose: Common DisMAP functions
+#
+# Author: john.f.kennedy
+#
+# Created: 12/01/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+# -------------------------------------------------------------------------------
+# built-ins first
+import os
+import sys
+import traceback
+import inspect
+
+import arcpy # third-parties second
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = __file__ # report this script's path, not the "test.py" placeholder
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def parse_xml_file_format_and_save(csv_data_folder="", xml_file="", sort=False):
+ try:
+
+ import json
+ json_path = rf"{csv_data_folder}\root_dict.json"
+ #print(csv_data_folder)
+ with open(json_path, "r") as json_file:
+ root_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del json
+
+## root_dict = {"Esri" : 0, "dataIdInfo" : 1, "mdChar" : 2,
+## "mdContact" : 3, "mdDateSt" : 4, "mdFileID" : 5,
+## "mdLang" : 6, "mdMaint" : 7, "mdHrLv" : 8,
+## "mdHrLvName" : 9, "refSysInfo" : 10, "spatRepInfo" : 11,
+## "spdoinfo" : 12, "dqInfo" : 13, "distInfo" : 14,
+## "eainfo" : 15, "contInfo" : 16, "spref" : 17,
+## "spatRepInfo" : 18, "dataSetFn" : 19, "Binary" : 100,}
+
+ from lxml import etree
+
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ tree = etree.parse(xml_file, parser=parser) # To parse from a string, use the fromstring() function instead.
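+ # For example (hypothetical snippet), the same parse from an in-memory string:
+ #   root = etree.fromstring(b"<Esri><dataIdInfo/></Esri>", parser=parser)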
+ del parser
+
+ if sort:
+ root = tree.getroot()
+ for child in root.xpath("."):
+ child[:] = sorted(child, key=lambda x: root_dict[x.tag])
+ del child
+ del root
+ del sort
+ etree.indent(tree, space=' ')
+ tree.write(xml_file, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+ del tree
+ del xml_file, etree
+ del root_dict
+ del csv_data_folder
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+def print_xml_file(xml_file="", sort=False):
+ try:
+ # Note: a dict literal silently keeps only the last value for a repeated
+ # key, so "spatRepInfo" maps to 18 and a duplicate entry would be dead code.
+ root_dict = {"Esri" : 0, "dataIdInfo" : 1, "mdChar" : 2,
+ "mdContact" : 3, "mdDateSt" : 4, "mdFileID" : 5,
+ "mdLang" : 6, "mdMaint" : 7, "mdHrLv" : 8,
+ "mdHrLvName" : 9, "refSysInfo" : 10,
+ "spdoinfo" : 12, "dqInfo" : 13, "distInfo" : 14,
+ "eainfo" : 15, "contInfo" : 16, "spref" : 17,
+ "spatRepInfo" : 18, "dataSetFn" : 19, "Binary" : 100,}
+
+ from lxml import etree
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ tree = etree.parse(xml_file, parser=parser) # To parse from a string, use the fromstring() function instead.
+ del parser
+
+ if sort:
+ root = tree.getroot()
+ for child in root.xpath("."):
+ child[:] = sorted(child, key=lambda x: root_dict[x.tag])
+ del child
+ del root
+ del sort
+ etree.indent(tree, space=' ')
+ arcpy.AddMessage(etree.tostring(tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ del tree
+ del xml_file, etree
+ del root_dict
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+def add_fields(csv_data_folder="", in_table=""):
+ try:
+ # Import this Python module
+ #import dev_dismap_tools
+ #importlib.reload(dev_dismap_tools)
+
+ table = os.path.basename(in_table)
+ project_gdb = os.path.dirname(in_table)
+
+ _field_definitions = field_definitions(csv_data_folder, "")
+ _table_definitions = table_definitions(csv_data_folder, "")
+ del csv_data_folder
+
+ # set workspace environment
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = r"Scratch\scratch.gdb"
+ arcpy.SetLogMetadata(True)
+
+ if "_IDW_Region" in table:
+ table = "IDW_Data"
+ elif "GFDL_Region" in table:
+ table = "GFDL_Data"
+ elif "GLMME_Region" in table:
+ table = "GLMME_Data"
+ elif "Indicators" in table:
+ table = "Indicators"
+ else:
+ pass
+
+ fields = _table_definitions[table]
+ del _table_definitions
+
+ field_definition_list = []
+ for field in fields:
+ field_definition_list.append([
+ _field_definitions[field]["field_name"],
+ _field_definitions[field]["field_type"],
+ _field_definitions[field]["field_aliasName"],
+ _field_definitions[field]["field_length"],
+ ])
+ del field
+ del fields
+
+ arcpy.AddMessage(f"Adding Fields to Table: {table}")
+ # arcpy.AddMessage(in_table)
+ # arcpy.AddMessage(field_definition_list)
+ arcpy.management.AddFields(in_table=in_table, field_description=field_definition_list, template="")
+ arcpy.AddMessage("\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t")))
+
+ # Declared Variables
+ del field_definition_list, _field_definitions
+ del project_gdb, table
+ # Imports
+ #del dev_dismap_tools
+ # Function parameters
+ del in_table
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+def alter_fields(csv_data_folder="", in_table=""):
+ try:
+ project_gdb = os.path.dirname(in_table)
+
+ _field_definitions = field_definitions(csv_data_folder, "")
+ del csv_data_folder
+
+ arcpy.env.workspace = project_gdb
+ arcpy.SetLogMetadata(True)
+
+ if arcpy.Exists(in_table):
+ arcpy.AddMessage(f"Altering Field Aliases for Table: {os.path.basename(in_table)}")
+ #arcpy.AddMessage(f"{table}")
+
+ fields = [f for f in arcpy.ListFields(in_table) if f.type not in ["Geometry", "OID"] and f.name not in ["Shape_Area", "Shape_Length"]]
+
+ for field in fields:
+ field_name = field.name
+ arcpy.AddMessage(f"\tAltering Field: {field_name} {field.type}")
+
+ if field_name in _field_definitions:
+
+ arcpy.AddMessage(f"\t\tAltering Field: {field_name} to: {_field_definitions[field_name]['field_aliasName']}")
+
+ try:
+ arcpy.management.AlterField(
+ in_table=in_table,
+ field=_field_definitions[field_name]["field_name"],
+ new_field_name=_field_definitions[field_name]["field_name"],
+ new_field_alias=_field_definitions[field_name]["field_aliasName"],
+ field_length=_field_definitions[field_name]["field_length"],
+ field_is_nullable="NULLABLE",
+ clear_field_alias="DO_NOT_CLEAR",
+ )
+ arcpy.AddMessage("\t\t\t{0}\n".format(arcpy.GetMessages().replace("\n", "\n\t\t\t")))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+
+ elif field_name not in _field_definitions:
+ arcpy.AddWarning(f"###--->>> Field: {field_name} is not in fieldDefinitions <<<---###")
+ else:
+ pass
+ del field, field_name
+ del fields
+ else:
+ arcpy.AddWarning(f"###--->>> Alter fields: {os.path.basename(in_table)} not found <<<---###")
+
+ del _field_definitions, project_gdb, in_table
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+def backup_gdb(project_gdb=""):
+ try:
+
+ arcpy.AddMessage("Making a backup")
+ arcpy.management.Copy(project_gdb, project_gdb.replace(".gdb", "_Backup.gdb"))
+ arcpy.AddMessage("\t" + arcpy.GetMessages(0).replace("\n", "\n\t"))
+
+ arcpy.AddMessage("Compacting the backup")
+ arcpy.management.Compact(project_gdb.replace(".gdb", "_Backup.gdb"))
+ arcpy.AddMessage("\t" + arcpy.GetMessages(0).replace("\n", "\n\t"))
+
+ del project_gdb
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+def basic_metadata(csv_data_folder="", in_table=""):
+ # Deprecated
+ try:
+
+ table = os.path.basename(in_table)
+ project_gdb = os.path.dirname(in_table)
+
+ metadata_dictionary = metadata_dictionary_json(csv_data_folder, "")
+ del csv_data_folder
+
+ # set workspace environment
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.env.scratchWorkspace = r"Scratch\scratch.gdb"
+ arcpy.env.workspace = project_gdb
+ arcpy.SetLogMetadata(True)
+
+ if arcpy.Exists(table):
+ arcpy.AddMessage(f"Adding metadata to: {table}")
+
+ if table.endswith(".crf"):
+ table = table.replace(".crf", "_Mosaic")
+
+ # from arcpy import metadata as md
+ # # https://pro.arcgis.com/en/pro-app/latest/arcpy/metadata/metadata-class.htm
+ # dataset_md = md.Metadata(in_table)
+ # dataset_md.synchronize("ALWAYS", 0)
+ # dataset_md.save()
+ # dataset_md.reload()
+
+ # dataset_md.synchronize("NOT_CREATED", 0)
+ # dataset_md.title = metadata_dictionary[table]["md_title"]
+ # dataset_md.tags = metadata_dictionary[table]["md_tags"]
+ # dataset_md.summary = metadata_dictionary[table]["md_summary"]
+ # dataset_md.description = metadata_dictionary[table]["md_description"]
+ # dataset_md.credits = metadata_dictionary[table]["md_credits"]
+ # dataset_md.accessConstraints = metadata_dictionary[table]["md_access_constraints"]
+ # dataset_md.save()
+ # dataset_md.reload()
+
+ # arcpy.AddMessage(metadata_dictionary[table]["md_title"])
+ # arcpy.AddMessage(metadata_dictionary[table]["md_tags"])
+ # arcpy.AddMessage(metadata_dictionary[table]["md_summary"])
+ # arcpy.AddMessage(metadata_dictionary[table]["md_description"])
+ # arcpy.AddMessage(metadata_dictionary[table]["md_credits"])
+ # arcpy.AddMessage(metadata_dictionary[table]["md_access_constraints"])
+
+ arcpy.AddMessage(f"Adding metadata to: {table} completed")
+
+ #del dataset_md, md
+
+ else:
+ arcpy.AddWarning(f"Adding Metadata: {table} not found")
+
+ del metadata_dictionary, table, project_gdb, in_table
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+def check_datasets(datasets=[]):
+ try:
+
+ def formatDateTime(dateTime):
+ from datetime import datetime, timezone
+
+ d = datetime.strptime(dateTime, "%Y-%m-%dT%H:%M:%S.%f")
+ d = d.replace(tzinfo=timezone.utc)
+ d = d.astimezone()
+ return d.strftime("%b %d %Y %I:%M:%S %p")
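+ # e.g. formatDateTime("2024-12-01T15:30:00.000000") converts the UTC
+ # timestamp to local time ("Dec 01 2024 10:30:00 AM" in US Eastern);
+ # the exact string depends on the machine's time zone.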
+
+ for dataset in datasets:
+ # Create a Describe object from the feature class
+ #
+ desc = arcpy.da.Describe(dataset)
+ # Print some feature class properties
+ # baseName
+ # catalogPath
+ # children
+ # childrenExpanded
+ # Examine children and print their name and dataType
+ #
+ # arcpy.AddMessage("Children:")
+ # for child in desc.children:
+ # arcpy.AddMessage("\t%s = %s" % (child.name, child.dataType))
+ # dataElementType
+ # dataType
+ # extension
+ # file
+ # fullPropsRetrieved
+ # metadataRetrieved
+
+ arcpy.AddMessage(f"Dataset Name: {desc['name']}")
+ arcpy.AddMessage(f"\tDataset Path: {desc['path']}")
+ arcpy.AddMessage(f"\tDataset Type: {desc['dataType']}")
+
+ if desc["dataType"] == "FeatureClass":
+ # arcpy.AddMessage(f"\tFeature Type: {desc['featureType']}")
+ arcpy.AddMessage(f"\tData Type: {desc['dataType']}")
+ # arcpy.AddMessage(f"\tDataset Type: {desc['datasetType']}")
+ arcpy.AddMessage(f"\tShape Type: {desc['shapeType']}")
+ arcpy.AddMessage(f"\tDate Created: {formatDateTime(desc['dateCreated'])}")
+ arcpy.AddMessage(f"\tDate Accessed: {formatDateTime(desc['dateAccessed'])}")
+ arcpy.AddMessage(f"\tDate Modified: {formatDateTime(desc['dateModified'])}")
+ arcpy.AddMessage(f"\tSize: {round(desc['size'] * 0.000001, 2)} MB")
+ arcpy.AddMessage(f"\tSpatial Reference: {desc['spatialReference'].name}")
+ # arcpy.AddMessage(f"Spatial Index: {str(desc.hasSpatialIndex)}")
+ # arcpy.AddMessage(f"Has M: {desc.hasM}")
+ # arcpy.AddMessage(f"Has Z: {desc.hasZ}")
+ # arcpy.AddMessage(f"Shape Field Name: {desc.shapeFieldName}")
+ # arcpy.AddMessage(f"Split Model: {str(desc.hasSpatialIndex)}")
+ # arcpy.AddMessage(desc["fields"])
+ fields = [f.name for f in desc["fields"]]
+ oid = desc["OIDFieldName"]
+ # Limit output to the first five rows by ObjectID
+ arcpy.AddMessage(f"\t{', '.join(fields)}")
+ for row in arcpy.da.SearchCursor(dataset, fields, f"{oid} <= 5"):
+ arcpy.AddMessage(f"\t{row}")
+ del row
+ del oid, fields
+
+ ## fields = [f.name for f in desc["fields"]]
+ ## oid_field_name = desc["OIDFieldName"]
+ ## # Use SQL TOP to sort field values
+ ## arcpy.AddMessage(f"\t{', '.join(fields)}")
+ ## oids = [oid for oid in arcpy.da.SearchCursor(dataset, f"{oid_field_name}")]
+ ## from random import sample
+ ## random_indices = sample(oids, 5)
+ ## del sample
+ ## for row in arcpy.da.SearchCursor(dataset, fields, f"{oid_field_name} in {random_indices}"):
+ ## arcpy.AddMessage(f"\t{row}")
+ ## del row
+ ## del oids, random_indices, oid_field_name, fields
+
+ elif desc["dataType"] == "RasterDataset":
+ arcpy.AddMessage(f"\tData Type: {desc['dataType']}")
+ arcpy.AddMessage(f"\tCell Size: {desc['meanCellHeight']} x {desc['meanCellWidth']}")
+ arcpy.AddMessage(f"\tExtent: {desc['extent']}")
+ arcpy.AddMessage(f"\tHeight & Width: {desc['height']} x {desc['width']}")
+ arcpy.AddMessage(f"\tSpatial Reference: {desc['spatialReference'].name}")
+
+ # for key in sorted(desc):
+ # value = str(desc[key])
+ # arcpy.AddMessage(f"Key: '{key:<30}' Value: '{value:<25}'")
+ # del value, key
+
+ elif desc["dataType"] == "Table":
+ arcpy.AddMessage(f"\tData Type: {desc['dataType']}")
+ arcpy.AddMessage(f"\tDate Created: {formatDateTime(desc['dateCreated'])}")
+ arcpy.AddMessage(f"\tDate Accessed: {formatDateTime(desc['dateAccessed'])}")
+ arcpy.AddMessage(f"\tDate Modified: {formatDateTime(desc['dateModified'])}")
+ arcpy.AddMessage(f"\tSize: {round(desc['size'] * 0.000001, 2)} MB")
+ # arcpy.AddMessage(desc["fields"])
+ fields = [f.name for f in desc["fields"]]
+ oid = desc["OIDFieldName"]
+ # Limit output to the first five rows by ObjectID
+ arcpy.AddMessage(f"\t{', '.join(fields)}")
+ for row in arcpy.da.SearchCursor(dataset, fields, f"{oid} <= 5"):
+ arcpy.AddMessage(f"\t{row}")
+ del row
+ del oid, fields
+
+ elif desc["dataType"] == "MosaicDataset":
+ for prop in (
+ "DSID", "JPEGQuality", "LERCTolerance", "MExtent", "OIDFieldName",
+ "ZExtent", "allowedCompressionMethods", "allowedFields",
+ "allowedMensurationCapabilities", "allowedMosaicMethods", "bandCount",
+ "baseName", "blendWidth", "blendWidthUnits", "catalogPath",
+ "cellSizeToleranceFactor", "children", "childrenExpanded",
+ "childrenNames", "clipToBoundary", "compressionType",
+ "dataElementType", "dataType", "datasetType",
+ "defaultCompressionMethod", "defaultMensurationCapability",
+ "defaultMosaicMethod", "defaultResamplingMethod", "defaultSubtypeCode",
+ "endTimeField", "extent", "featureType", "fields", "file",
+ "footprintMayContainNoData", "format", "fullPropsRetrieved", "hasOID",
+ "hasSpatialIndex", "indexes", "isInteger", "isTimeInUTC",
+ "maxDownloadImageCount", "maxDownloadSizeLimit", "maxRastersPerMosaic",
+ "maxRecordsReturned", "maxRequestSizeX", "maxRequestSizeY",
+ "minimumPixelContribution", "mosaicOperator", "name", "orderField",
+ "path", "permanent", "rasterFieldName", "rasterMetadataLevel",
+ "shapeFieldName", "shapeType", "sortAscending", "spatialReference",
+ "startTimeField", "supportsBigInteger", "supportsBigObjectID",
+ "supportsDateOnly", "supportsTimeOnly", "supportsTimestampOffset",
+ "timeValueFormat", "useTime", "viewpointSpacingX", "viewpointSpacingY",
+ "workspace",
+ ):
+ arcpy.AddMessage(f"\t {prop}: {desc[prop]}")
+ del prop
+
+ # arcpy.AddMessage(desc["fields"])
+ fields = [f.name for f in desc["fields"]]
+ oid = desc["OIDFieldName"]
+ # Limit the preview to the first five records by ObjectID
+ arcpy.AddMessage(f"\t{', '.join(fields)}")
+ for row in arcpy.da.SearchCursor(dataset, fields, f"{oid} <= 5"):
+ arcpy.AddMessage(f"\t{row}")
+ del row
+ del oid, fields
+
+ elif desc["dataType"]:
+ arcpy.AddWarning(f"Data Type: {desc['dataType']} is not handled")
+
+ else:
+ arcpy.AddWarning("No data to describe!!")
+
+ del desc, dataset
+
+ del formatDateTime, datasets
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ traceback.print_exc()
+ sys.exit()
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+def check_transformation(ds, cs):
+ dsc_in = arcpy.Describe(ds)
+ insr = dsc_in.spatialReference
+
+ # If an output coordinate system is set and differs from the input coordinate system
+ if cs and (cs.name != insr.name):
+ translist = arcpy.ListTransformations(insr, cs, dsc_in.extent)
+ trans = translist[0] if translist else ""
+ # arcpy.AddMessage(f"\t{trans}\n")
+ # for trans in translist:
+ # arcpy.AddMessage(f"\t{trans}")
+ return trans
+ # No transformation is needed when the coordinate systems match
+ return ""
+
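+## Example usage sketch (hypothetical names: dataset, out_fc): look up a
+## geographic transformation, if one applies, before projecting a dataset.
+## out_sr = arcpy.SpatialReference(4326) # WGS 1984
+## transformation = check_transformation(dataset, out_sr)
+## arcpy.management.Project(dataset, out_fc, out_sr, transformation)
+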
+def clear_folder(folder=""):
+ try:
+ import shutil
+
+ for filename in os.listdir(folder):
+ file_path = os.path.join(folder, filename)
+ try:
+ if os.path.isfile(file_path) or os.path.islink(file_path):
+ arcpy.AddMessage(f"Removing: {os.path.basename(file_path)}")
+ os.unlink(file_path)
+ elif os.path.isdir(file_path):
+ arcpy.AddMessage(f"Removing: {os.path.basename(file_path)}")
+ shutil.rmtree(file_path)
+ except Exception as e:
+ arcpy.AddError(f"Failed to delete {os.path.basename(file_path)}. Reason: {e}")
+ if "filename" in locals().keys(): del filename, file_path
+
+ # Imports
+ del shutil
+ # Function Parameter
+ del folder
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ sys.exit()
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ sys.exit()
+ else:
+ return True
+
+def compare_metadata_xml(file1="", file2=""):
+ """This requires the use of the clone ArcGIS Pro env and the installation of xmldiff."""
+ # https://buildmedia.readthedocs.org/media/pdf/xmldiff/latest/xmldiff.pdf
+ try:
+ # Test if passed workspace exists, if not raise Exception
+ if not os.path.exists(rf"{file1}") or not os.path.exists(rf"{file2}"):
+ raise Exception(f"{os.path.basename(file1)} or {os.path.basename(file2)} is missing!!")
+
+ from lxml import etree
+ from xmldiff import main, formatting
+
+ # Examples
+ # diff = main.diff_files(file1, file2, formatter=formatting.XMLFormatter())
+ # diff = main.diff_files(file1, file2, formatter=formatting.XMLFormatter(normalize=formatting.WS_BOTH, pretty_print=True))
+ # The DiffFormatter creates a script of steps to take to make file2 like file1
+ __diff = main.diff_files(file1, file2, formatter=formatting.DiffFormatter())
+ # If there are differences
+ if __diff:
+ __diff = main.diff_files(file1, file2, formatter=formatting.XMLFormatter())
+ return __diff
+ else:
+ return None
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return __diff
+ finally:
+ if "__diff" in locals().keys(): del __diff
+ # Imports (guarded: an exception may occur before the imports run)
+ if "etree" in locals().keys(): del etree
+ if "main" in locals().keys(): del main
+ if "formatting" in locals().keys(): del formatting
+ # Function Parameters
+ del file1, file2
+
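+## Example usage sketch (hypothetical file paths): report the metadata
+## differences, if any, between two XML files.
+## diff = compare_metadata_xml(rf"{project_folder}\old.xml", rf"{project_folder}\new.xml")
+## if diff: arcpy.AddMessage(diff)
+## del diff
+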
+def convertSeconds(seconds):
+ try:
+ _min, _sec = divmod(seconds, 60)
+ _hour, _min = divmod(_min, 60)
+ return f"{int(_hour)}:{int(_min)}:{_sec:.3f}"
+
+ except: # noqa: E722
+ traceback.print_exc()
+
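+## Example: convertSeconds(3725.5) returns "1:2:5.500" (hours:minutes:seconds).
+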
+##def calculate_core_species(table):
+## try:
+##
+## region_gdb = os.path.dirname(table)
+##
+## arcpy.env.workspace = region_gdb
+## arcpy.env.scratchWorkspace = region_gdb
+## arcpy.env.parallelProcessingFactor = "100%"
+## arcpy.env.overwriteOutput = True
+## arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+## arcpy.SetLogMetadata(True)
+## arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+## # 1—If a tool produces a warning or an error, it will throw an exception.
+## # 2—If a tool produces an error, it will throw an exception. This is the default.
+## arcpy.SetMessageLevels(["NORMAL"]) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+## del region_gdb
+##
+## # def unique_years(table):
+## # with arcpy.da.SearchCursor(table, ["Year"]) as cursor:
+## # return sorted({row[0] for row in cursor})
+##
+## def unique_values(table, field):
+## with arcpy.da.SearchCursor(table, [field]) as cursor:
+## return sorted({row[0] for row in cursor}) # Uses a set comprehension
+##
+## # Get unique list of years from the table
+## all_years = unique_values(table, "Year")
+##
+## PrintListOfYears = False
+## if PrintListOfYears:
+## # Print list of years
+## arcpy.AddMessage(f"--> Years: {', '.join([str(y) for y in all_years])}")
+##
+## # Get minimum year (first year) and maximum year (last year)
+## min_year, max_year = min(all_years), max(all_years)
+##
+## # Print min year
+## arcpy.AddMessage(f"--> Min Year: {min_year} and Max Year: {max_year}")
+##
+## del min_year, max_year
+##
+## del PrintListOfYears
+##
+## arcpy.AddMessage(f"\t Creating {os.path.basename(table)} Table View")
+##
+## species_table_view = arcpy.management.MakeTableView(table, f"{os.path.basename(table)} Table View")
+##
+## unique_species = unique_values(species_table_view, "Species")
+##
+## for unique_specie in unique_species:
+## arcpy.AddMessage(f"\t\t Unique Species: {unique_specie}")
+##
+## # Replace a layer/table view name with a path to a dataset (which can be a layer file) or create the layer/table view within the script
+## # The following inputs are layers or table views: "ai_csv"
+## arcpy.management.SelectLayerByAttribute(in_layer_or_view=species_table_view, selection_type="NEW_SELECTION", where_clause=f"Species = '{unique_specie}' AND WTCPUE > 0.0 AND DistributionProjectName = 'NMFS/Rutgers IDW Interpolation'")
+##
+## all_specie_years = unique_values(species_table_view, "Year")
+##
+## # arcpy.AddMessage(f"\t\t\t Years: {', '.join([str(y) for y in all_specie_years])}")
+##
+## # arcpy.AddMessage(f"\t\t Select Species ({unique_specie}) by attribute")
+##
+## arcpy.management.SelectLayerByAttribute(in_layer_or_view=species_table_view, selection_type="NEW_SELECTION", where_clause=f"Species = '{unique_specie}'")
+##
+## # arcpy.AddMessage(f"\t Set CoreSpecies to Yes or No")
+##
+## if all_years == all_specie_years:
+## arcpy.AddMessage(f"\t\t\t {unique_specie} is a Core Species")
+## arcpy.management.CalculateField(in_table=species_table_view, field="CoreSpecies", expression="'Yes'", expression_type="PYTHON", code_block="")
+## else:
+## arcpy.AddMessage(f"\t\t\t @@@@ {unique_specie} is not a Core Species @@@@")
+## arcpy.management.CalculateField(in_table=species_table_view, field="CoreSpecies", expression="'No'", expression_type="PYTHON", code_block="")
+##
+## arcpy.management.SelectLayerByAttribute(species_table_view, "CLEAR_SELECTION")
+## del unique_specie, all_specie_years
+##
+## arcpy.management.Delete(f"{os.path.basename(table)} Table View")
+## del species_table_view, unique_species, all_years
+## del unique_values
+## del table
+##
+## except KeyboardInterrupt:
+## raise SystemExit
+## except arcpy.ExecuteWarning:
+## arcpy.AddWarning(arcpy.GetMessages(1))
+## except arcpy.ExecuteError:
+## arcpy.AddError(arcpy.GetMessages(2))
+## traceback.print_exc()
+## raise SystemExit
+## except Exception:
+## arcpy.AddError(arcpy.GetMessages(2))
+## traceback.print_exc()
+## raise SystemExit
+## except: # noqa: E722
+## arcpy.AddError(arcpy.GetMessages(2))
+## traceback.print_exc()
+## raise SystemExit
+## else:
+## # While in development, leave here. For test, move to finally
+## rk = [key for key in locals().keys() if not key.startswith('__')]
+## if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+## return True
+## finally:
+## pass
+
+def dataset_title_dict(project_gdb=""):
+ try:
+ # Test if passed workspace exists, if not raise Exception
+ if not arcpy.Exists(project_gdb):
+ arcpy.AddError(project_gdb)
+ arcpy.AddError(f"{os.path.basename(project_gdb)} is missing!!")
+ sys.exit()
+
+ if "Scratch" in project_gdb:
+ project = os.path.basename(os.path.dirname(os.path.dirname(project_gdb)))
+ else:
+ project = os.path.basename(os.path.dirname(project_gdb))
+
+ project_folder = os.path.dirname(project_gdb)
+ crf_folder = rf"{project_folder}\CRFs"
+ _credits = "These data were produced by NMFS OST."
+ access_constraints = "***No Warranty*** The user assumes the entire risk related to its use of these data. NMFS is providing these data 'as is' and NMFS disclaims any and all warranties, whether express or implied, including (without limitation) any implied warranties of merchantability or fitness for a particular purpose. No warranty expressed or implied is made regarding the accuracy or utility of the data on any other system or for general or scientific purposes, nor shall the act of distribution constitute any such warranty. It is strongly recommended that careful attention be paid to the contents of the metadata file associated with these data to evaluate dataset limitations, restrictions or intended use. In no event will NMFS be liable to you or to any third party for any direct, indirect, incidental, consequential, special or exemplary damages or lost profit resulting from any use or misuse of these data."
+
+ __datasets_dict = {}
+
+ dataset_codes = {row[0] : [row[1], row[2], row[3], row[4]] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"), ["DatasetCode", "PointFeatureType", "DistributionProjectCode", "Region", "Season"])}
+ #for dataset_code in dataset_codes:
+ # dataset_codes[dataset_code] = [s for s in dataset_codes[dataset_code] if s.strip()]
+ # #print(f"Dataset Code: {dataset_code}\n\t{dataset_codes[dataset_code]}")
+
+ for dataset_code in dataset_codes:
+ point_feature_type = dataset_codes[dataset_code][0] if dataset_codes[dataset_code][0] else ""
+ distribution_project_code = dataset_codes[dataset_code][1] if dataset_codes[dataset_code][1] else ""
+ region = dataset_codes[dataset_code][2] if dataset_codes[dataset_code][2] else dataset_code.replace("_", " ")
+ season = dataset_codes[dataset_code][3] if dataset_codes[dataset_code][3] else ""
+
+ tags = f"DisMap, {region}, {season}" if season else f"DisMap, {region}"
+ #tags = f"{tags}, distribution, seasonal distribution, fish, invertebrates, climate change, fishery-independent surveys, ecological dynamics, oceans, biosphere, earth science, species/population interactions, aquatic sciences, fisheries, range changes"
+ summary = "These data were created as part of the DisMAP project to enable visualization and analysis of changes in fish and invertebrate distributions"
+
+ #arcpy.AddMessage(f"Dataset Code: {dataset_code}")
+
+ if distribution_project_code == "IDW":
+ table_name = f"{dataset_code}_{distribution_project_code}"
+ table_name_s = f"{table_name}_{date_code(project)}"
+ table_name_st = f"{region} {season} Table {date_code(project)}".replace('  ', ' ')
+
+ #arcpy.AddMessage(f"\tProcessing: {table_name}")
+
+ __datasets_dict[table_name] = {"Dataset Service" : table_name_s,
+ "Dataset Service Title" : table_name_st,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : "This table represents the CSV data files in ArcGIS format.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ del table_name, table_name_s, table_name_st
+
+ table_name = f"{dataset_code}_{distribution_project_code}"
+ sample_locations_fc = f"{table_name}_{point_feature_type.replace(' ', '_')}"
+ sample_locations_fcs = f"{table_name.replace('_IDW', '')}_{point_feature_type.replace(' ', '_')}_{date_code(project)}"
+ feature_service_title = f"{region} {season} {point_feature_type} {date_code(project)}".replace('  ', ' ')
+ sample_locations_fcst = f"{feature_service_title}"
+ del feature_service_title
+
+ __datasets_dict[sample_locations_fc] = {"Dataset Service" : sample_locations_fcs,
+ "Dataset Service Title" : sample_locations_fcst,
+ "Tags" : tags,
+ "Summary" : f"{summary}. These layers provide information on the spatial extent/boundaries of the bottom trawl surveys. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"This survey points layer provides information on both the locations where species are caught in several NOAA Fisheries surveys and the amount (i.e., biomass weight catch per unit effort, standardized to kg/ha) of each species that was caught at each location. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tSample Locations FC: {sample_locations_fc}")
+ #arcpy.AddMessage(f"\tSample Locations FCS: {sample_locations_fcs}")
+ #arcpy.AddMessage(f"\tSample Locations FST: {sample_locations_fcst}")
+
+ del table_name, sample_locations_fc, sample_locations_fcs, sample_locations_fcst
+
+ dataset_code = f"{dataset_code}_{distribution_project_code}" if distribution_project_code not in dataset_code else dataset_code
+
+ # Bathymetry
+ bathymetry_r = f"{dataset_code}_Bathymetry"
+ bathymetry_rs = f"{dataset_code}_Bathymetry_{date_code(project)}"
+ feature_service_title = f"{region} {season} Bathymetry {date_code(project)}".replace('  ', ' ')
+ bathymetry_rst = f"{feature_service_title}"
+ del feature_service_title
+
+ #arcpy.AddMessage(f"\tProcessing: {bathymetry_r}")
+
+ __datasets_dict[bathymetry_r] = {"Dataset Service" : bathymetry_rs,
+ "Dataset Service Title" : bathymetry_rst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The bathymetry dataset represents the ocean depth at that grid cell.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tBathymetry R: {bathymetry_r}")
+ #arcpy.AddMessage(f"\tBathymetry RS: {bathymetry_rs}")
+ #arcpy.AddMessage(f"\tBathymetry RST: {bathymetry_rst}")
+
+ del bathymetry_r, bathymetry_rs, bathymetry_rst
+
+ # Boundary
+ boundary_fc = f"{dataset_code}_Boundary"
+ boundary_fcs = f"{dataset_code}_Boundary_{date_code(project)}"
+ feature_service_title = f"{region} {season} Boundary {date_code(project)}".replace('  ', ' ')
+ boundary_fcst = f"{feature_service_title}"
+ del feature_service_title
+
+ #arcpy.AddMessage(f"\tProcessing: {boundary_fc}")
+
+ __datasets_dict[boundary_fc] = {"Dataset Service" : boundary_fcs,
+ "Dataset Service Title" : boundary_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"These files contain the spatial boundaries of the NOAA Fisheries Bottom-trawl surveys. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Eastern Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tBoundary FC: {boundary_fc}")
+ #arcpy.AddMessage(f"\tBoundary FCS: {boundary_fcs}")
+ #arcpy.AddMessage(f"\tBoundary FCST: {boundary_fcst}")
+
+ del boundary_fc, boundary_fcs, boundary_fcst
+
+ # Boundary Line
+ boundary_line_fc = f"{dataset_code}_Boundary_Line"
+ boundary_line_fcs = f"{dataset_code}_Boundary_Line_{date_code(project)}"
+ feature_service_title = f"{region} {season} Boundary Line {date_code(project)}".replace('  ', ' ')
+ boundary_line_fcst = f"{feature_service_title}"
+ del feature_service_title
+
+ #arcpy.AddMessage(f"\tProcessing: {boundary_line_fc}")
+
+ __datasets_dict[boundary_line_fc] = {"Dataset Service" : boundary_line_fcs,
+ "Dataset Service Title" : boundary_line_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"These files contain the spatial boundaries of the NOAA Fisheries Bottom-trawl surveys. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Eastern Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tBoundary FC: {boundary_line_fc}")
+ #arcpy.AddMessage(f"\tBoundary FCS: {boundary_line_fcs}")
+ #arcpy.AddMessage(f"\tBoundary FCST: {boundary_line_fcst}")
+
+ del boundary_line_fc, boundary_line_fcs, boundary_line_fcst
+
+ # Extent Points
+ extent_points_fc = f"{dataset_code}_Extent_Points"
+ extent_points_fcs = f"{dataset_code}_Extent_Points_{date_code(project)}"
+ feature_service_title = f"{region} {season} Extent Points {date_code(project)}".replace('  ', ' ')
+ extent_points_fcst = f"{feature_service_title}"
+ del feature_service_title
+
+ #arcpy.AddMessage(f"\tProcessing: {extent_points_fc}")
+ __datasets_dict[extent_points_fc] = {"Dataset Service" : extent_points_fcs,
+ "Dataset Service Title" : extent_points_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The Extent Points layer represents the extent of the model region.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+ #arcpy.AddMessage(f"\tExtent Points FC: {extent_points_fc}")
+ #arcpy.AddMessage(f"\tExtent Points FCS: {extent_points_fcs}")
+ #arcpy.AddMessage(f"\tExtent Points FCST: {extent_points_fcst}")
+ del extent_points_fc, extent_points_fcs, extent_points_fcst
+
+ fishnet_fc = f"{dataset_code}_Fishnet"
+ fishnet_fcs = f"{dataset_code}_Fishnet_{date_code(project)}"
+ feature_service_title = f"{region} {season} Fishnet {date_code(project)}".replace('  ', ' ')
+ fishnet_fcst = f"{feature_service_title}"
+ del feature_service_title
+
+ #arcpy.AddMessage(f"\tProcessing: {fishnet_fc}")
+
+ __datasets_dict[fishnet_fc] = {"Dataset Service" : fishnet_fcs,
+ "Dataset Service Title" : fishnet_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The Fishnet is used to create the latitude and longitude rasters.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tFishnet FC: {fishnet_fc}")
+ #arcpy.AddMessage(f"\tFishnet FCS: {fishnet_fcs}")
+ #arcpy.AddMessage(f"\tFishnet FCST: {fishnet_fcst}")
+
+ del fishnet_fc, fishnet_fcs, fishnet_fcst
+
+ indicators_tb = f"{dataset_code}_Indicators"
+ indicators_tbs = f"{dataset_code}_Indicators_{date_code(project)}"
+ feature_service_title = f"{region} {season} Indicators Table {date_code(project)}".replace('  ', ' ')
+ indicators_tbst = f"{feature_service_title}"
+ del feature_service_title
+
+ #arcpy.AddMessage(f"\tProcessing: {indicators_tb}")
+
+ __datasets_dict[indicators_tb] = {"Dataset Service" : indicators_tbs,
+ "Dataset Service Title" : indicators_tbst,
+ "Tags" : tags,
+ "Summary" : f"{summary}. This table provides the key metrics used to evaluate a species distribution shift. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"These data contain the key distribution metrics of center of gravity, range limits, and depth for each species in the portal. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tIndicators T: {indicators_tb}")
+ #arcpy.AddMessage(f"\tIndicators TS: {indicators_tbs}")
+ #arcpy.AddMessage(f"\tIndicators TST: {indicators_tbst}")
+
+ del indicators_tb, indicators_tbs, indicators_tbst
+
+ lat_long_fc = f"{dataset_code}_Lat_Long"
+ lat_long_fcs = f"{dataset_code}_Lat_Long_{date_code(project)}"
+ feature_service_title = f"{region} {season} Lat Long {date_code(project)}".replace('  ', ' ')
+ lat_long_fcst = f"{feature_service_title}"
+ del feature_service_title
+
+ #arcpy.AddMessage(f"\tProcessing: {lat_long_fc}")
+
+ __datasets_dict[lat_long_fc] = {"Dataset Service" : lat_long_fcs,
+ "Dataset Service Title" : lat_long_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The lat_long layer is used to get the latitude & longitude values to create these rasters",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tLat Long FC: {lat_long_fc}")
+ #arcpy.AddMessage(f"\tLat Long FCS: {lat_long_fcs}")
+ #arcpy.AddMessage(f"\tLat Long FCST: {lat_long_fcst}")
+
+ del lat_long_fc, lat_long_fcs, lat_long_fcst
+
+ latitude_r = f"{dataset_code}_Latitude"
+ latitude_rs = f"{dataset_code}_Latitude_{date_code(project)}"
+ feature_service_title = f"{region} {season} Latitude {date_code(project)}".replace('  ', ' ')
+ latitude_rst = f"{feature_service_title}"
+ del feature_service_title
+
+ #arcpy.AddMessage(f"\tProcessing: {latitude_r}")
+
+ __datasets_dict[latitude_r] = {"Dataset Service" : latitude_rs,
+ "Dataset Service Title" : latitude_rst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The Latitude raster",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tLatitude R: {latitude_r}")
+ #arcpy.AddMessage(f"\tLatitude RS: {latitude_rs}")
+ #arcpy.AddMessage(f"\tLatitude RST: {latitude_rst}")
+
+ del latitude_r, latitude_rs, latitude_rst
+
+ layer_species_year_image_name_tb = f"{dataset_code}_LayerSpeciesYearImageName"
+ layer_species_year_image_name_tbs = f"{dataset_code}_LayerSpeciesYearImageName_{date_code(project)}"
+ feature_service_title = f"{region} {season} Layer Species Year Image Name Table {date_code(project)}".replace('  ', ' ')
+ layer_species_year_image_name_tbst = f"{feature_service_title}"
+ del feature_service_title
+
+ #arcpy.AddMessage(f"\tProcessing: {layer_species_year_image_name_tb}")
+
+ __datasets_dict[layer_species_year_image_name_tb] = {"Dataset Service" : layer_species_year_image_name_tbs,
+ "Dataset Service Title" : layer_species_year_image_name_tbst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"Layer Species Year Image Name Table",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tLayerSpeciesYearImageName T: {layer_species_year_image_name_tb}")
+ #arcpy.AddMessage(f"\tLayerSpeciesYearImageName TS: {layer_species_year_image_name_tbs}")
+ #arcpy.AddMessage(f"\tLayerSpeciesYearImageName TST: {layer_species_year_image_name_tbst}")
+
+ del layer_species_year_image_name_tb, layer_species_year_image_name_tbs, layer_species_year_image_name_tbst
+
+ longitude_r = f"{dataset_code}_Longitude"
+ longitude_rs = f"{dataset_code}_Longitude_{date_code(project)}"
+ feature_service_title = f"{region} {season} Longitude {date_code(project)}".replace('  ', ' ')
+ longitude_rst = f"{feature_service_title}"
+ del feature_service_title
+
+ #arcpy.AddMessage(f"\tProcessing: {longitude_r}")
+
+ __datasets_dict[longitude_r] = {"Dataset Service" : longitude_rs,
+ "Dataset Service Title" : longitude_rst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The Longitude raster",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tLongitude R: {longitude_r}")
+ #arcpy.AddMessage(f"\tLongitude RS: {longitude_rs}")
+ #arcpy.AddMessage(f"\tLongitude RST: {longitude_rst}")
+
+ del longitude_r, longitude_rs, longitude_rst
+
+ mosaic_r = f"{dataset_code}_Mosaic"
+ mosaic_rs = f"{dataset_code}_Mosaic_{date_code(project)}"
+ feature_service_title = f"{region} {season} {dataset_code[dataset_code.rfind('_')+1:]} Mosaic {date_code(project)}".replace('  ', ' ')
+ mosaic_rst = f"{feature_service_title}"
+ del feature_service_title
+
+ #arcpy.AddMessage(f"\tProcessing: {mosaic_r}")
+
+ __datasets_dict[mosaic_r] = {"Dataset Service" : mosaic_rs,
+ "Dataset Service Title" : mosaic_rst,
+ "Tags" : tags,
+ "Summary" : f"{summary}. These interpolated biomass layers provide information on the spatial distribution of species caught in the NOAA Fisheries fisheries-independent surveys. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"NOAA Fisheries and its partners conduct fisheries-independent surveys in 8 regions in the US (Northeast, Southeast, Gulf of Mexico, West Coast, Gulf of Alaska, Bering Sea, Aleutian Islands, Hawai’i Islands). These surveys are designed to collect information on the seasonal distribution, relative abundance, and biodiversity of fish and invertebrate species found in U.S. waters. Over 400 species of fish and invertebrates have been identified in these surveys.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tMosaic R: {mosaic_r}")
+ #arcpy.AddMessage(f"\tMosaic RS: {mosaic_rs}")
+ #arcpy.AddMessage(f"\tMosaic RST: {mosaic_rst}")
+
+ del mosaic_r, mosaic_rs, mosaic_rst
+
+ crf_r = f"{dataset_code}.crf"
+ crf_rs = f"{dataset_code}_{date_code(project)}"
+ feature_service_title = f"{region} {season} {dataset_code[dataset_code.rfind('_')+1:]} {date_code(project)}".replace('  ', ' ')
+ crf_rst = f"{feature_service_title}"
+ del feature_service_title
+
+ #arcpy.AddMessage(f"\tProcessing: {crf_r}")
+
+ __datasets_dict[crf_r] = {"Dataset Service" : crf_rs,
+ "Dataset Service Title" : crf_rst,
+ "Tags" : tags,
+ "Summary" : f"{summary}. These interpolated biomass layers provide information on the spatial distribution of species caught in the NOAA Fisheries fisheries-independent surveys. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"NOAA Fisheries and its partners conduct fisheries-independent surveys in 8 regions in the US (Northeast, Southeast, Gulf of Mexico, West Coast, Gulf of Alaska, Bering Sea, Aleutian Islands, Hawai’i Islands). These surveys are designed to collect information on the seasonal distribution, relative abundance, and biodiversity of fish and invertebrate species found in U.S. waters. Over 400 species of fish and invertebrates have been identified in these surveys.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tCFR R: {crf_r}")
+ #arcpy.AddMessage(f"\tCFR RS: {crf_rs}")
+ #arcpy.AddMessage(f"\tCFR RST: {crf_rst}")
+
+ del crf_r, crf_rs, crf_rst
+
+ raster_mask_r = f"{dataset_code}_Raster_Mask"
+ raster_mask_rs = f"{dataset_code}_Raster_Mask_{date_code(project)}"
+ feature_service_title = f"{region} {season} Raster Mask {date_code(project)}".replace('  ', ' ')
+ raster_mask_rst = f"{feature_service_title}"
+ del feature_service_title
+
+ #arcpy.AddMessage(f"\tProcessing: {raster_mask_r}")
+
+ __datasets_dict[raster_mask_r] = {"Dataset Service" : raster_mask_rs,
+ "Dataset Service Title" : raster_mask_rst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : "The raster mask is used for image production.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tRaster_Mask R: {raster_mask_r}")
+ #arcpy.AddMessage(f"\tRaster_Mask RS: {raster_mask_rs}")
+ #arcpy.AddMessage(f"\tRaster_Mask RST: {raster_mask_rst}")
+
+ del raster_mask_r, raster_mask_rs, raster_mask_rst
+
+ region_fc = f"{dataset_code}_Region"
+ region_fcs = f"{dataset_code}_Region_{date_code(project)}"
+ feature_service_title = f"{region} {season} Region {date_code(project)}".replace('  ', ' ')
+ region_fcst = f"{feature_service_title}"
+ del feature_service_title
+
+ #arcpy.AddMessage(f"\tProcessing: {region_fc}")
+
+ __datasets_dict[region_fc] = {"Dataset Service" : region_fcs,
+ "Dataset Service Title" : region_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"These files contain the spatial boundaries of the NOAA Fisheries Bottom-trawl surveys. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tRegion FC: {region_fc}")
+ #arcpy.AddMessage(f"\tRegion FCS: {region_fcs}")
+ #arcpy.AddMessage(f"\tRegion FCST: {region_fcst}")
+
+ del region_fc, region_fcs, region_fcst
+ del tags
+ else:
+ pass
+
+ if "Datasets" == dataset_code:
+
+ #arcpy.AddMessage(f"\tProcessing: {dataset_code}")
+
+ datasets_tb = dataset_code
+ datasets_tbs = f"{dataset_code}_{date_code(project)}"
+ datasets_tbst = f"{dataset_code} {date_code(project)}"
+
+ __datasets_dict[datasets_tb] = {"Dataset Service" : datasets_tbs,
+ "Dataset Service Title" : datasets_tbst,
+ "Tags" : "DisMAP, Datasets",
+ "Summary" : summary,
+ "Description" : "This table functions as a look-up table of values",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"{__datasets_dict[datasets_tb]}")
+ del datasets_tb, datasets_tbs, datasets_tbst
+ else:
+ pass
+
+ if "DisMAP_Regions" == dataset_code:
+
+ #arcpy.AddMessage(f"\tProcessing: {dataset_code}")
+
+ regions_fc = dataset_code
+ regions_fcs = f"{dataset_code}_{date_code(project)}"
+ regions_fcst = f"DisMAP Regions {date_code(project)}"
+
+ __datasets_dict[regions_fc] = {"Dataset Service" : regions_fcs,
+ "Dataset Service Title" : regions_fcst,
+ "Tags" : "DisMAP Regions",
+ "Summary" : summary,
+ "Description" : "These files contain the spatial boundaries of the NOAA Fisheries Bottom-trawl surveys. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Eastern Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ del regions_fc, regions_fcs, regions_fcst
+
+ else:
+ pass
+ if "Indicators" == dataset_code:
+
+ #arcpy.AddMessage(f"\tProcessing: {dataset_code}")
+
+ indicators_tb = f"{dataset_code}"
+ indicators_tbs = f"{dataset_code}_{date_code(project)}"
+ indicators_tbst = f"{dataset_code} {date_code(project)}"
+
+ __datasets_dict[indicators_tb] = {"Dataset Service" : indicators_tbs,
+ "Dataset Service Title" : indicators_tbst,
+ "Tags" : "DisMAP, Indicators",
+ "Summary" : f"{summary}. This table provides the key metrics used to evaluate a species distribution shift. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"These data contain the key distribution metrics of center of gravity, range limits, and depth for each species in the portal. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ del indicators_tb, indicators_tbs, indicators_tbst
+ else:
+ pass
+
+ if "Species_Filter" == dataset_code:
+
+ #arcpy.AddMessage(f"\tProcessing: {dataset_code}")
+
+ species_filter_tb = dataset_code
+ species_filter_tbs = f"{dataset_code}_{date_code(project)}"
+ species_filter_tbst = f"Species Filter Table {date_code(project)}"
+
+ __datasets_dict[species_filter_tb] = {"Dataset Service" : species_filter_tbs,
+ "Dataset Service Title" : species_filter_tbst,
+ "Tags" : "DisMAP, Species Filter Table",
+ "Summary" : summary,
+ "Description" : "This table functions as a look-up table of values",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tLayerSpeciesYearImageName T: {species_filter_tb}")
+ #arcpy.AddMessage(f"\tLayerSpeciesYearImageName TS: {species_filter_tbs}")
+ #arcpy.AddMessage(f"\tLayerSpeciesYearImageName TST: {species_filter_tbst}")
+
+ del species_filter_tb, species_filter_tbs, species_filter_tbst
+
+ else:
+ pass
+ if "DisMAP_Survey_Info" == dataset_code:
+
+ #arcpy.AddMessage(f"\tProcessing: {dataset_code}")
+
+ tb = dataset_code
+ tbs = f"{dataset_code}_{date_code(project)}"
+ tbst = f"DisMAP Survey Info Table {date_code(project)}"
+
+ __datasets_dict[tb] = {"Dataset Service" : tbs,
+ "Dataset Service Title" : tbst,
+ "Tags" : "DisMAP; DisMAP Survey Info Table",
+ "Summary" : summary,
+ "Description" : "This table functions as a look-up table of values",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tLayerSpeciesYearImageName T: {tb}")
+ #arcpy.AddMessage(f"\tLayerSpeciesYearImageName TS: {tbs}")
+ #arcpy.AddMessage(f"\tLayerSpeciesYearImageName TST: {tbst}")
+
+ del tb, tbs, tbst
+
+ else:
+ pass
+ if "SpeciesPersistenceIndicatorPercentileBin" == dataset_code:
+
+ #arcpy.AddMessage(f"\tProcessing: {dataset_code} DisMAP_Survey_Info")
+
+ tb = dataset_code
+ tbs = f"{dataset_code}_{date_code(project)}"
+ tbst = f"Species Persistence Indicator Percentile Bin Table {date_code(project)}"
+
+ __datasets_dict[tb] = {"Dataset Service" : tbs,
+ "Dataset Service Title" : tbst,
+ "Tags" : "DisMAP; Species Persistence Indicator Percentile Bin Table",
+ "Summary" : summary,
+ "Description" : "This table functions as a look-up table of values",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tLayerSpeciesYearImageName T: {tb}")
+ #arcpy.AddMessage(f"\tLayerSpeciesYearImageName TS: {tbs}")
+ #arcpy.AddMessage(f"\tLayerSpeciesYearImageName TST: {tbst}")
+
+ del tb, tbs, tbst
+
+ else:
+ pass
+ if "SpeciesPersistenceIndicatorTrend" == dataset_code:
+
+ #arcpy.AddMessage(f"\tProcessing: {dataset_code}")
+
+ tb = dataset_code
+ tbs = f"{dataset_code}_{date_code(project)}"
+ tbst = f"Species Persistence Indicator Trend Table {date_code(project)}"
+
+ __datasets_dict[tb] = {"Dataset Service" : tbs,
+ "Dataset Service Title" : tbst,
+ "Tags" : "DisMAP; Species Persistence Indicator Trend Table",
+ "Summary" : summary,
+ "Description" : "This table functions as a look-up table of values",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #arcpy.AddMessage(f"\tLayerSpeciesYearImageName T: {tb}")
+ #arcpy.AddMessage(f"\tLayerSpeciesYearImageName TS: {tbs}")
+ #arcpy.AddMessage(f"\tLayerSpeciesYearImageName TST: {tbst}")
+
+ del tb, tbs, tbst
+
+ else:
+ pass
+ #arcpy.AddMessage(f"\tProcessing: {dataset_code}")
+ #table = dataset_code
+ #table_s = f"{dataset_code}_{date_code(project)}"
+ #table_st = f"{table_s.replace('_',' ')} {date_code(project)}"
+ #arcpy.AddMessage(f"\tProcessing: {table_s}")
+ #__datasets_dict[table] = {"Dataset Service" : table_s,
+ # "Dataset Service Title" : table_st,
+ # "Tags" : f"DisMAP, {table}",
+ # "Summary" : summary,
+ # "Description" : "Unknown table",
+ # "Credits" : _credits,
+ # "Access Constraints" : access_constraints}
+ #arcpy.AddMessage(f"\tTable: {table}")
+ #arcpy.AddMessage(f"\tTable TS: {table_s}")
+ #arcpy.AddMessage(f"\tTable TST: {table_st}")
+ #del table, table_s, table_st
+ #arcpy.AddWarning(f"{dataset_code} is missing")
+
+ del summary
+ del point_feature_type, distribution_project_code, region, season
+ del dataset_code
+
+ del _credits, access_constraints
+
+ del dataset_codes
+ del project_folder, crf_folder
+ del project, project_gdb
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ traceback.print_exc()
+ sys.exit()
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit:
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ if "tags" in locals().keys(): del tags
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return __datasets_dict
+ finally:
+ if "__datasets_dict" in locals().keys(): del __datasets_dict
+
+def date_code(version):
+ try:
+ from datetime import datetime
+ from time import strftime
+
+ _date_code = ""
+
+ if version.isdigit():
+ # The version value is in 'YYYYMMDD' format (e.g. 20230501)
+ # and is converted to 'Month Day Year' (e.g. May 1 2023).
+ # Note: '%#d' (unpadded day) is Windows-specific; POSIX strftime uses '%-d'
+ _date_code = datetime.strptime(version, "%Y%m%d").strftime("%B %#d %Y")
+ else:
+ # The version value is 'Month Day Year' (e.g. May 1 2023)
+ # and is converted to 'YYYYMMDD' format (e.g. 20230501)
+ _date_code = datetime.strptime(version, "%B %d %Y").strftime("%Y%m%d")
+ # Imports
+ del datetime, strftime
+ del version
+
+ import copy
+ __results = copy.deepcopy(_date_code)
+ del _date_code, copy
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return __results
+ finally:
+ if "__results" in locals().keys(): del __results
+
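The round trip performed by `date_code` can be sketched without the `%#d` directive, which is Windows-only (POSIX `strftime` uses `%-d`), by building the unpadded day manually. `to_label` and `to_version` are hypothetical helper names, not part of this module:

```python
from datetime import datetime

def to_label(version: str) -> str:
    # 'YYYYMMDD' -> 'Month D YYYY'; format the day portably, without %#d
    dt = datetime.strptime(version, "%Y%m%d")
    return f"{dt.strftime('%B')} {dt.day} {dt.year}"

def to_version(label: str) -> str:
    # 'Month D YYYY' -> 'YYYYMMDD' (%d accepts an unpadded day when parsing)
    return datetime.strptime(label, "%B %d %Y").strftime("%Y%m%d")
```

Either direction composed with the other is the identity, which is what lets callers pass a version string or a title label interchangeably.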
+def dTypesCSV(csv_data_folder="", table=""):
+ try:
+## if "IDW" in table:
+## table = "IDW_Data"
+## elif "GLMME" in table:
+## table = "GLMME_Data"
+## elif "GFDL" in table:
+## table = "GFDL_Data"
+## elif "Indicators" in table:
+## table = "Indicators"
+##
+## ## elif "DisMAP_Regions" in table:
+## ## table = "DisMAP_Regions"
+## ##
+## ## elif "Datasets" in table:
+## ## table = "Datasets"
+## ##
+## ## elif "Datasets" in table:
+## ## table = "Datasets"
+## ##
+## ## elif "Datasets" in table:
+## ## table = "Datasets"
+##
+## else:
+## table = table
+
+ _table_definitions = table_definitions(csv_data_folder, "")
+ #for key in _table_definitions:
+ # arcpy.AddMessage(key, _table_definitions[key])
+ # del key
+
+ # Every CSV column is read in as a string
+ field_csv_dtypes = {k.replace(" ", "_") : "str" for k in _table_definitions[table.replace(".csv", "")]}
+
+ del _table_definitions
+ del csv_data_folder, table
+
+ # Import
+ import copy
+ __results = copy.deepcopy(field_csv_dtypes)
+ del field_csv_dtypes, copy
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return __results
+ finally:
+ if "__results" in locals().keys(): del __results
+
+def dTypesGDB(csv_data_folder="", table=""):
+ try:
+## if "IDW" in table:
+## table = "IDW_Data"
+## elif "GLMME" in table:
+## table = "GLMME_Data"
+## elif "GFDL" in table:
+## table = "GFDL_Data"
+## else:
+## pass
+
+ _field_definitions = field_definitions(csv_data_folder, "")
+ _table_definitions = table_definitions(csv_data_folder, "")
+
+ fields = _table_definitions[table.replace(".csv", "")]
+
+ field_gdb_dtypes = []
+ for field in fields:
+ field_definition = _field_definitions[field]
+ # arcpy.AddMessage(field_definition["field_type"])
+ # fd = field_definition[:-2]
+ # del fd[2], field_definition
+ if field_definition["field_type"] == "TEXT" or field_definition["field_type"] == "String":
+ field_dtype = f"U{field_definition['field_length']}"
+ elif field_definition["field_type"] == "SHORT" or field_definition["field_type"] == "Integer":
+ # np.dtype('u4') == dtype('uint32')
+ #field_dtype = f"U4"
+ field_dtype = "u4"
+ elif field_definition["field_type"] == "DOUBLE" or field_definition["field_type"] == "Double":
+ # np.dtype('d') == dtype('float64'), np.dtype('f') == dtype('float32'), np.dtype('f8') == dtype('float64')
+ field_dtype = "d"
+ elif field_definition["field_type"] == "DATE" or field_definition["field_type"] == "Date":
+ field_dtype = "M8[us]"
+ else:
+ field_dtype = ""
+ field_gdb_dtypes.append((f"{field}", f"{field_dtype}"))
+ del field_definition, field, field_dtype
+ del fields
+
+ del _field_definitions, _table_definitions
+
+ del table, csv_data_folder
+
+ import copy
+ __results = copy.deepcopy(field_gdb_dtypes)
+ del field_gdb_dtypes, copy
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return __results
+ finally:
+ if "__results" in locals().keys(): del __results
+
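The branching in `dTypesGDB` reduces to a lookup from geodatabase field types to NumPy dtype strings. A minimal sketch, mirroring the TEXT/SHORT/DOUBLE/DATE branches above (`gdb_dtype` is a hypothetical helper name, not part of this module):

```python
def gdb_dtype(field_type: str, field_length: int = 0) -> str:
    ftype = field_type.upper()
    if ftype in ("TEXT", "STRING"):
        return f"U{field_length}"   # fixed-width Unicode, e.g. 'U50'
    if ftype in ("SHORT", "INTEGER"):
        return "u4"                 # np.dtype('u4') == dtype('uint32')
    if ftype == "DOUBLE":
        return "d"                  # np.dtype('d') == dtype('float64')
    if ftype == "DATE":
        return "M8[us]"             # datetime64 with microsecond resolution
    return ""                       # unknown types fall through, as above
```

Normalizing with `upper()` also covers the mixed-case variants ("String", "Integer", "Double", "Date") that the original checks separately.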
+def export_metadata(csv_data_folder="", in_table=""):
+ # Deprecated
+ try:
+ table = os.path.basename(in_table)
+ ws = os.path.dirname(in_table)
+ project_folder = os.path.dirname(ws)
+ # csv_data_folder = os.path.join(project_folder, "CSV_Data")
+ project = os.path.basename(project_folder)
+ version = project[7:]
+ del in_table, project_folder, project, version, csv_data_folder
+
+ ws_type = arcpy.Describe(ws).workspaceType
+
+ if ws_type == "LocalDatabase":
+ os.chdir(os.path.dirname(ws))
+ elif ws_type == "FileSystem":
+ os.chdir(ws)
+ elif ws_type == "RemoteDatabase":
+ pass
+ else:
+ pass
+ del ws_type
+
+ cwd = os.getcwd()
+
+ # ArcPy Environments
+ # Set the overwriteOutput to True
+ arcpy.env.overwriteOutput = True
+ # Use all of the cores on the machine.
+ arcpy.env.parallelProcessingFactor = "100%"
+ # Set the scratch workspace
+ arcpy.env.scratchWorkspace = r"Scratch\scratch.gdb"
+ # Set the workspace to the workspace
+ arcpy.env.workspace = ws
+
+ # Set Log Metadata to True to record all geoprocessing
+ # steps in metadata
+ arcpy.SetLogMetadata(True)
+
+ arcpy.AddMessage(f"Dataset: {table}")
+
+ # Process: Export Metadata
+ arcpy.AddMessage(f"\tExporting Metadata Object for Dataset: {table}")
+
+ # https://pro.arcgis.com/en/pro-app/latest/arcpy/metadata/metadata-class.htm
+ from arcpy import metadata as md
+
+ dataset_md = md.Metadata(table)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.reload()
+
+ arcpy.AddMessage(f"\t\tStep 1: Saving the metadata file for: {table} as an EXACT_COPY")
+ out_xml = os.path.join(cwd, "Export Metadata", f"{table} Step 1 EXACT_COPY.xml")
+ dataset_md.saveAsXML(out_xml, "EXACT_COPY")
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ arcpy.AddMessage(f"\t\tStep 2: Saving the metadata file for: {table} as a TEMPLATE")
+ out_xml = os.path.join(cwd, "Export Metadata", f"{table} Step 2 TEMPLATE.xml")
+ dataset_md.saveAsXML(out_xml, "TEMPLATE")
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ del dataset_md, md
+
+ # Declared variable
+ del ws, table, cwd
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+def field_definitions(csv_data_folder="", field=""):
+ try:
+ import json, copy
+
+ # Read a File
+ with open(os.path.join(csv_data_folder, "field_definitions.json"), "r") as json_file:
+ try:
+ _field_definitions = json.load(json_file)
+ except: # noqa: E722
+ arcpy.AddError(f"CSV Data: {csv_data_folder}")
+ arcpy.AddError(f"Field Defs: {json_file.name}")
+ sys.exit(1)
+ del json_file
+
+ if not field:
+ # Return a dictionary of all field definitions
+ __results = copy.deepcopy(_field_definitions)
+ elif field: # A specific field was requested; return its definition if present
+ if field in _field_definitions:
+ __results = copy.deepcopy(_field_definitions[field])
+ else:
+ __results = False
+ else:
+ pass
+ del _field_definitions
+ # Imports copy
+ del json, copy
+ # Function parameters
+ del csv_data_folder, field
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return __results
+ finally:
+ if "__results" in locals().keys(): del __results
+
+def get_encoding_index_col(csv_file):
+ import chardet
+ import pandas as pd
+ from pathlib import Path
+ # Open the file in binary mode
+ with open(csv_file, 'rb') as f:
+ # Read the file's content
+ data = f.read()
+ # Detect the encoding using chardet.detect()
+ encoding_result = chardet.detect(data)
+ # Retrieve the encoding information
+ encoding = encoding_result['encoding']
+ del f, data, encoding_result
+ # Print the detected encoding
+ #arcpy.AddMessage("Detected Encoding:", encoding)
+
+ path = Path(csv_file)
+ path.write_text(path.read_text(encoding=encoding), encoding="utf8")
+ del path
+
+ dtypes = {}
+ # Read the CSV file into a DataFrame
+ df = pd.read_csv(csv_file, encoding = encoding, delimiter = ",",)
+ # Analyze the data types and lengths
+ for column in df.columns: dtypes[column] = df[column].dtype; del column
+ first_column = list(dtypes.keys())[0]
+ index_column = 0 if first_column == "Unnamed: 0" else None
+ # Declared Variables
+ del df, dtypes, first_column
+
+ # Import
+ del chardet, pd, Path
+
+ return encoding, index_column
+
+def get_transformation(gsr_wkt="", psr_wkt=""):
+ gsr = arcpy.SpatialReference()
+ gsr.loadFromString(gsr_wkt)
+ # arcpy.AddMessage(f"\tGSR: {gsr.name}")
+
+ psr = arcpy.SpatialReference()
+ psr.loadFromString(psr_wkt)
+ # arcpy.AddMessage(f"\tPSR: {psr.name}")
+
+ transformslist = arcpy.ListTransformations(gsr, psr)
+ transform = transformslist[0] if transformslist else ""
+ # arcpy.AddMessage(f"\t\tTransformation: {transform}\n")
+ # for transform in transformslist:
+ # arcpy.AddMessage(f"\t{transform}")
+
+ del gsr_wkt, psr_wkt
+
+ return transform
+
+def import_metadata(csv_data_folder="", dataset=""):
+ try:
+ #arcpy.AddMessage(csv_data_folder)
+ #arcpy.AddMessage(dataset)
+ if len(csv_data_folder) == 0 or len(dataset) == 0:
+ arcpy.AddError(f"{os.path.basename(csv_data_folder)} or {os.path.basename(dataset)} is empty")
+ raise SystemExit
+ else:
+ pass
+ # Import
+ from arcpy import metadata as md
+
+ #arcpy.AddMessage(f"{csv_data_folder}")
+ #arcpy.AddMessage(f"{dataset}")
+
+ dataset_name = os.path.basename(dataset)
+ project_gdb = os.path.dirname(dataset)
+
+ #arcpy.AddMessage(f"{dataset_name}")
+ #arcpy.AddMessage(f"{project_gdb}")
+
+ #if dataset_name.endswith(".crf"):
+ # dataset_name = dataset_name.replace(".crf", "_CRF")
+ # #_project = os.path.basename(os.path.dirname(os.path.dirname(dataset)))
+ # #project_gdb = rf"{os.path.dirname(os.path.dirname(dataset))}\{_project}.gdb"
+ # #del _project
+ #else:
+ # pass
+ # #project_gdb = os.path.dirname(dataset)
+
+ #arcpy.AddMessage(f"{'-' * 10}")
+ #arcpy.AddError(project_gdb)
+ #arcpy.AddError(csv_data_folder)
+ #arcpy.AddError(dataset)
+ #arcpy.AddMessage(f"{'-' * 10}")
+ #raise SystemExit
+
+ # Test if passed workspace exists, if not raise Exception
+ if not arcpy.Exists(project_gdb):
+ arcpy.AddError(f"{os.path.basename(project_gdb)} is missing!!")
+ sys.exit()
+
+ #arcpy.AddMessage(csv_data_folder)
+
+ project_folder = os.path.dirname(csv_data_folder)
+ metadata_folder = rf"{project_folder}\Metadata_Export"
+
+ # ArcPy Environments
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(os.path.dirname(project_gdb), "Scratch\\scratch.gdb")
+ arcpy.SetLogMetadata(True)
+
+ try:
+ arcpy.AddMessage("Create Metadata Dictionary")
+ metadata_dictionary = dataset_title_dict(project_gdb)
+ if metadata_dictionary:
+ pass
+ #for key in metadata_dictionary:
+ # arcpy.AddMessage(f"{key}, {metadata_dictionary[key]}")
+ # del key
+ elif not metadata_dictionary:
+ arcpy.AddWarning("Metadata Dictionary is empty")
+ else:
+ pass
+ except: # noqa: E722
+ traceback.print_exc()
+ raise SystemExit
+
+ arcpy.AddMessage(f"Metadata for: {dataset_name} dataset")
+
+ # arcpy.AddMessage(f"\tDataset Service: {datasets_dict[dataset]['Dataset Service']}")
+ # arcpy.AddMessage(f"\tDataset Service Title: {datasets_dict[dataset]['Dataset Service Title']}")
+
+ # Assign the Metadata object's content to a target item
+ dataset_md = md.Metadata(dataset)
+ #resource_citation_contacts = rf"{metadata_folder}\resource_citation_contacts.xml"
+ resource_citation_contacts = rf"{metadata_folder}\contacts.xml"
+ #arcpy.AddMessage(resource_citation_contacts)
+ dataset_md.importMetadata(resource_citation_contacts)
+ dataset_md.save()
+ dataset_md.synchronize("OVERWRITE")
+ dataset_md.save()
+ dataset_md.synchronize('ALWAYS')
+ dataset_md.save()
+ del resource_citation_contacts #, poc_template_md
+ # Create a new Metadata object and add some content to it
+ # https://pro.arcgis.com/en/pro-app/latest/arcpy/metadata/metadata-class.htm
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+ dataset_md.synchronize('ALWAYS')
+ dataset_md.save()
+ dataset_md.reload()
+ out_xml = rf"{metadata_folder}\{dataset_md.title}.xml"
+ #dataset_md.saveAsXML(out_xml, "REMOVE_ALL_SENSITIVE_INFO")
+ #dataset_md.saveAsXML(out_xml, "REMOVE_MACHINE_NAMES")
+ dataset_md.saveAsXML(out_xml)
+
+ parse_xml_file_format_and_save(csv_data_folder=csv_data_folder, xml_file=out_xml, sort=True)
+ del out_xml
+
+ del dataset_md
+
+ # Import
+ del md
+ # Declared variable
+ del metadata_dictionary
+ del dataset_name, project_gdb, metadata_folder, project_folder
+
+ # Function parameters
+ del dataset, csv_data_folder
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ traceback.print_exc()
+ sys.exit()
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit:
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+def metadata_dictionary_json(csv_data_folder="", dataset_name=""):
+ try:
+ import json
+
+ # Read a File
+ with open(rf"{csv_data_folder}\metadata_dictionary.json", "r") as json_file:
+ metadata_dictionary = json.load(json_file)
+ del json_file, json, csv_data_folder
+
+ if not dataset_name:
+ __results = metadata_dictionary
+ elif dataset_name:
+ __results = metadata_dictionary[dataset_name]
+ else:
+ __results = None
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except SystemExit:
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return __results
+ finally:
+ if "__results" in locals().keys(): del __results
+
+def pretty_format_xml_file(metadata=""):
+ try:
+ # xml.etree.ElementTree Imports
+ import xml.etree.ElementTree as ET
+
+ #arcpy.AddMessage(f"###--->>> Converting metadata file: {os.path.basename(metadata)} to pretty format")
+ if os.path.isfile(metadata):
+ tree = ET.ElementTree(file=metadata)
+ root = tree.getroot()
+ tree = ET.ElementTree(root)
+ ET.indent(tree, space="\t", level=0)
+ xmlstr = ET.tostring(root, encoding='UTF-8').decode("UTF-8")
+ xmlstr = xmlstr.replace(' Sync="TRUE">\n', ' Sync="TRUE">')
+ xmlstr = xmlstr.replace(' Sync="FALSE">\n', ' Sync="FALSE">')
+ xmlstr = xmlstr.replace(' value="eng">\n', ' value="eng">')
+ xmlstr = xmlstr.replace(' value="US">\n', ' value="US">')
+ xmlstr = xmlstr.replace(' value="001">\n', ' value="001">')
+ xmlstr = xmlstr.replace(' value="002">\n', ' value="002">')
+ xmlstr = xmlstr.replace(' value="003">\n', ' value="003">')
+ xmlstr = xmlstr.replace(' value="004">\n', ' value="004">')
+ xmlstr = xmlstr.replace(' value="005">\n', ' value="005">')
+ xmlstr = xmlstr.replace(' value="006">\n', ' value="006">')
+ xmlstr = xmlstr.replace(' value="007">\n', ' value="007">')
+ xmlstr = xmlstr.replace(' value="008">\n', ' value="008">')
+ xmlstr = xmlstr.replace(' value="009">\n', ' value="009">')
+ xmlstr = xmlstr.replace(' value="010">\n', ' value="010">')
+ xmlstr = xmlstr.replace(' value="011">\n', ' value="011">')
+ xmlstr = xmlstr.replace(' value="012">\n', ' value="012">')
+ xmlstr = xmlstr.replace(' value="013">\n', ' value="013">')
+ xmlstr = xmlstr.replace(' value="014">\n', ' value="014">')
+ xmlstr = xmlstr.replace(' value="015">\n', ' value="015">')
+ xmlstr = xmlstr.replace(' code="0">\n', ' code="0">')
+ xmlstr = xmlstr.replace('\n', '')
+
+ xmlstr = xmlstr.replace("", '')
+ xmlstr = xmlstr.replace("", '')
+ xmlstr = xmlstr.replace("", '')
+
+ xmlstr = xmlstr.replace('Sync="FALSE"', 'Sync="TRUE"')
+
+ try:
+ with open(metadata, "w") as f:
+ f.write(xmlstr)
+ del f
+ except: # noqa: E722
+ arcpy.AddError(f"The metadata file: {os.path.basename(metadata)} can not be overwritten!!")
+ del xmlstr, tree, root
+ else:
+ arcpy.AddWarning(f"\t###--->>> {os.path.basename(metadata)} is missing!! <<<---###")
+ arcpy.AddWarning(f"\t###--->>> {metadata} <<<---###")
+ # Declared variable
+ del metadata, ET
+
+ except KeyboardInterrupt:
+ raise SystemExit
+ except arcpy.ExecuteWarning:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ raise SystemExit
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ raise SystemExit
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ raise SystemExit
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ raise SystemExit
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
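The core of `pretty_format_xml_file` — parse, `ET.indent`, re-serialize — can be exercised on its own (Python 3.9+ is required for `ET.indent`; the sample XML below is illustrative, not real DisMAP metadata):

```python
import xml.etree.ElementTree as ET

xml = "<metadata><title>Demo</title><tags>DisMAP</tags></metadata>"
root = ET.fromstring(xml)
tree = ET.ElementTree(root)
ET.indent(tree, space="\t", level=0)  # mutates text/tail in place
pretty = ET.tostring(root, encoding="UTF-8").decode("UTF-8")
print(pretty)
```

Each child element ends up on its own tab-indented line, which is the state the string-level `Sync="TRUE"` cleanups above then operate on.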
+def pretty_format_xml_files(metadata_folder=""):
+ try:
+ arcpy.env.overwriteOutput = True
+ arcpy.env.workspace = metadata_folder
+
+ xml_files = [rf"{metadata_folder}\{xml}" for xml in arcpy.ListFiles("*.xml")]
+ for xml_file in xml_files:
+ arcpy.AddMessage(os.path.basename(xml_file))
+ pretty_format_xml_file(xml_file)
+ del xml_file
+
+ # Declared Variables declared in the function
+ del xml_files
+
+ # Function parameters
+ del metadata_folder
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
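+
+# A minimal usage sketch for pretty_format_xml_files() (hypothetical folder
+# path; the function pretty-prints every *.xml found directly in the folder):
+#
+# pretty_format_xml_files(r"C:\Temp\ArcGIS Metadata")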
+
+def table_definitions(csv_data_folder="", dataset_name=""):
+ try:
+ #arcpy.AddMessage(dataset_name)
+ #arcpy.AddMessage(csv_data_folder)
+ import json, copy
+
+ # Read a File
+ with open(os.path.join(csv_data_folder, "table_definitions.json"), "r") as json_file:
+ _table_definitions = json.load(json_file)
+ del json_file
+
+ if not dataset_name:
+ # Return a dictionary of all values
+ __results = copy.deepcopy(_table_definitions)
+ else:
+ arcpy.AddMessage(f"IN: {dataset_name}")
+## if "_IDW" in dataset_name:
+## dataset_name = "IDW_Data"
+## elif "_GLMME" in dataset_name:
+## dataset_name = "GLMME_Data"
+## elif "_GFDL" in dataset_name:
+## dataset_name = "GFDL_Data"
+## else:
+## dataset_name = dataset_name
+## # arcpy.AddMessage(f"OUT: {dataset_name}")
+ __results = copy.deepcopy(_table_definitions[dataset_name])
+
+ # Declared Variables created in function
+ del _table_definitions
+ # Import
+ del json, copy
+ # Function parameters
+ del csv_data_folder, dataset_name
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return __results
+ finally:
+ if "__results" in locals().keys(): del __results
+
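+# A minimal usage sketch for table_definitions() (hypothetical dataset name;
+# assumes table_definitions.json exists in csv_data_folder):
+#
+# all_defs = table_definitions(csv_data_folder) # every table definition
+# datasets = table_definitions(csv_data_folder, "Datasets") # one table's field list
+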
+
+# #
+# Function: unique_values
+# Gets the sorted unique values of a field in a table
+# @param string table: The name of the layer
+# @param string field: The name of the field
+# @return array: a sorted array of unique values.
+# #
+def unique_values(table, field):
+ #arcpy.AddMessage(table)
+ with arcpy.da.SearchCursor(table, [field]) as cursor:
+ return sorted({row[0] for row in cursor}) # Uses a set comprehension to de-duplicate
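+
+# A minimal usage sketch for unique_values() (hypothetical table and field
+# names; assumes an active ArcGIS Pro / arcpy session):
+#
+# years = unique_values("AI_IDW_Sample_Locations", "Year")
+# distinct Year values come back sorted; the set comprehension de-duplicates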
+
+# #
+# Function: unique_years
+# Gets the unique years in a table
+# @param string table: The name of the layer
+# @return array: a sorted year array so we can go in order.
+# #
+def unique_years(table):
+ #arcpy.AddMessage(table)
+ arcpy.management.SelectLayerByAttribute(table, "CLEAR_SELECTION")
+ arcpy.management.SelectLayerByAttribute(table, "NEW_SELECTION", "Year IS NOT NULL")
+ with arcpy.da.SearchCursor(table, ["Year"]) as cursor:
+ return sorted({row[0] for row in cursor})
+
+def test_bed_1(project=""):
+ try:
+
+ base_project_folder = os.path.dirname(os.path.dirname(__file__))
+
+ project_gdb = rf"{base_project_folder}\{project}\{project}.gdb"
+
+ del base_project_folder
+
+ # Test if passed workspace exists, if not raise SystemExit
+ if not arcpy.Exists(project_gdb):
+ raise SystemExit(f"{os.path.basename(project_gdb)} is missing!!")
+ else:
+ pass
+
+ ## # Write to File
+ ## with open('fieldDefinitions.json', 'w') as json_file:
+ ## json.dump(fieldDefinitions(), json_file, indent=4)
+ ## del json_file
+ ##
+ ## # Write to File
+ ## with open('tableDefinitions.json', 'w') as json_file:
+ ## json.dump(tableDefinitions(), json_file, indent=4)
+ ## del json_file
+
+ ## # Read a File
+ ## with open('fieldDefinitions.json', 'r') as json_file:
+ ## field_definitions = json.load(json_file)
+ ## for field_definition in field_definitions:
+ ## arcpy.AddMessage(f'Field: {field_definition}')
+ ## for key in field_definitions[field_definition]:
+ ## arcpy.AddMessage(f"\t{key:<17} : {field_definitions[field_definition][key]}")
+ ## del field_definition
+ ## #del _field_definitions
+ ## del json_file
+
+ ## # Read a File
+ ## with open('tableDefinitions.json', 'r') as json_file:
+ ## table_definitions = json.load(json_file)
+ ## for table_definition in table_definitions:
+ ## arcpy.AddMessage(f"Table: {table_definition}")
+ ## table_fields = table_definitions[table_definition]
+ ## for table_field in table_fields:
+ ## #arcpy.AddMessage(f"\tField Name: {field}")
+ ## arcpy.AddMessage(f"\tField Name: {table_field:<17}")
+ ## for key in field_definitions[table_field]:
+ ## arcpy.AddMessage(f"\t\t{key:<17} : {field_definitions[table_field][key]}")
+ ## del key
+ ## del table_field
+ ## del table_fields
+ ## del table_definition
+ ## del _table_definitions
+ ## del json_file
+ ## del _field_definitions
+ ##
+ ## # Read a File
+ ## with open('fieldDefinitions.json', 'r') as json_file:
+ ## field_definitions = json.load(json_file)
+ ## del json_file
+ ##
+ ## # Read a File
+ ## with open('tableDefinitions.json', 'r') as json_file:
+ ## table_definitions = json.load(json_file)
+ ## del json_file
+ ##
+ ## # arcpy.AddMessage(field_definitions["Species"]["field_name"])
+ ## arcpy.AddMessage(table_definitions["Datasets"])
+ ##
+ ## del _field_definitions
+ ## del _table_definitions
+
+ ## project_name = os.path.basename(os.path.dirname(csv_data_folder))
+ ## arcpy.AddMessage(project_name)
+ ## arcpy.AddMessage(date_code(date_code(project_name)))
+ ## del project_name
+
+ # # # ###--->>>
+
+ ## tables = ["Datasets", "DisMAP_Regions", "AI_IDW"]
+ ## for table in tables:
+ ## field_csv_dtypes = dTypesCSV(csv_data_folder, table)
+ ##
+ ## arcpy.AddMessage(table)
+ ## for field_csv_dtype in field_csv_dtypes:
+ ## arcpy.AddMessage(f"\t{field_csv_dtype}"); del field_csv_dtype
+ ## del field_csv_dtypes
+ ##
+ ## field_gdb_dtypes = dTypesGDB(csv_data_folder, table)
+ ## for field_gdb_dtype in field_gdb_dtypes:
+ ## arcpy.AddMessage(f"\t{field_gdb_dtype}"); del field_gdb_dtype
+ ## del field_gdb_dtypes
+ ##
+ ## del table
+ ## del tables
+
+ ## _field_definitions = fieldDefinitions(csv_data_folder, "")
+ ## for field in field_definitions:
+ ## arcpy.AddMessage(field)
+ ## arcpy.AddMessage(f"\t{field_definitions[field]}")
+ ## del field
+ ## del _field_definitions
+
+## # First Test
+## _table_definitions = table_definitions(csv_data_folder, "")
+## #arcpy.AddMessage(table_definitions)
+## for table in table_definitions:
+## arcpy.AddMessage(table)
+## arcpy.AddMessage(f"\t{table_definitions[table]}");
+## del table
+## del _table_definitions
+
+ ## # Second Test
+ ## table_definition = tableDefinitions(csv_data_folder, "")
+ ## arcpy.AddMessage(table_definition); del table_definition
+ ##
+ ## #table = "DataSets"
+ ## #arcpy.AddMessage(table_definitions[table])
+ ## #arcpy.AddMessage(f"\t"); del table
+
+ ## # Third Test
+ ## tables = ["Datasets", "DisMAP_Regions", "AI_IDW",]
+ ## for table in tables:
+ ## table_definition = tableDefinitions(csv_data_folder, table)
+ ## arcpy.AddMessage(f"Table: {table}:\n\t{', '.join(table_definition)}"); del table_definition
+ ## del table
+ ## del tables
+
+ ## field = "DMSLat"
+ ## _field_definitions = fieldDefinitions(csv_data_folder, field)
+ ## if field_definitions:
+ ## for field in field_definitions:
+ ## arcpy.AddMessage(field)
+ ## #arcpy.AddMessage(f"\t{field_definitions[field]}")
+ ## del field
+ ## else:
+ ## arcpy.AddMessage(f"Field: {field} is not in fieldDefinitions")
+ ## del _field_definitions, field
+
+ ## import dev_dismap_tools
+ ##
+ ## csv_file=r"{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\April 1 2023\CSV_Data\Datasets.csv"
+ ## #csv_file=r"{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\April 1 2023\CSV_Data\Species_Filter.csv"
+ ##
+ ## table = os.path.basename(csv_file).replace(".csv", "")
+ ## csv_data_folder = os.path.dirname(csv_file)
+ ## project_folder = os.path.dirname(csv_data_folder)
+ ## project = os.path.basename(project_folder)
+ ## #version = project[7:]
+ ##
+ ## arcpy.AddMessage(f"\tProject GDB: DisMAP {project}.gdb")
+ ## gdb = os.path.join(project_folder, f"{project}.gdb")
+ ##
+ ## arcpy.env.overwriteOutput = True
+ ## arcpy.env.parallelProcessingFactor = "100%"
+ ## arcpy.env.scratchWorkspace = rf"Scratch\\scratch.gdb"
+ ## arcpy.env.workspace = gdb
+ ## arcpy.SetLogMetadata(True)
+ ##
+ ## field_csv_dtypes = dTypesCSV(csv_data_folder, table)
+ ## field_gdb_dtypes = dTypesGDB(csv_data_folder, table)
+ ##
+ ## arcpy.AddMessage(f"\tCreating Table: {table}")
+ ## arcpy.management.CreateTable(gdb, f"{table}", "", "", table.replace("_", " "))
+ ##
+ ## in_table = os.path.join(gdb, table)
+ ##
+ ## #add_fields(in_table)
+ ## #alterFields(in_table)
+ ## basic_metadata(in_table)
+ ## export_metadata(in_table)
+ ##
+ ## del csv_file, table, project_folder, project, gdb
+ ## del field_csv_dtypes, field_gdb_dtypes
+ ## del dev_dismap_tools, in_table
+
+## # ###--->>>
+## dataset = r'{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\May 1 2024\CRFs\NBS_IDW.crf'
+## import_metadata(csv_data_folder, dataset)
+## # ###--->>>
+
+## # ###--->>>
+## # Update Dataset Metadata Dictionary
+## datasets_dict = dataset_title_dict(project_gdb)
+##
+## # Write to File
+## with open(rf"{csv_data_folder}\metadata_dictionary.json", "w") as json_file:
+## json.dump(datasets_dict, json_file, indent=4)
+## del json_file
+##
+## # Read a File
+## with open(rf"{csv_data_folder}\metadata_dictionary.json", "r") as json_file:
+## datasets_dict = json.load(json_file)
+## del json_file
+##
+## for dataset in sorted(datasets_dict):
+## arcpy.AddMessage(f"Table: {dataset}")
+## arcpy.AddMessage(f"\tDataset Service: {datasets_dict[dataset]['Dataset Service']}")
+## arcpy.AddMessage(f"\tDataset Service Title: {datasets_dict[dataset]['Dataset Service Title']}")
+## arcpy.AddMessage(F"\tTags: {datasets_dict[dataset]['Tags']}")
+## arcpy.AddMessage(F"\tSummary: {datasets_dict[dataset]['Summary']}")
+## arcpy.AddMessage(F"\tDescription: {datasets_dict[dataset]['Description']}")
+## #arcpy.AddMessage(F"\tCredits: {datasets_dict[dataset]['Credits']}")
+## #arcpy.AddMessage(F"\tAccess Constraints: {datasets_dict[dataset]['Access Constraints']}")
+## arcpy.AddMessage(f"{'-'*80}")
+##
+## # dataset_md.synchronize("NOT_CREATED", 0)
+## # dataset_md.title = metadata_dictionary[table]["md_title"]
+## # dataset_md.tags = metadata_dictionary[table]["md_tags"]
+## # dataset_md.summary = metadata_dictionary[table]["md_summary"]
+## # dataset_md.description = metadata_dictionary[table]["md_description"]
+## # dataset_md.credits = metadata_dictionary[table]["md_credits"]
+## # dataset_md.accessConstraints = metadata_dictionary[table]["md_access_constraints"]
+##
+## del dataset
+## del datasets_dict
+##
+## # ###--->>>
+
+## # ###--->>>
+## dataset = r'{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\May 1 2024\May 1 2024.gdb\Datasets'
+## import_metadata(csv_data_folder, dataset)
+## del dataset
+
+## # ###--->>>
+
+## # ###--->>>
+## from arcpy import metadata as md
+##
+## arcpy.env.overwriteOutput = True
+## arcpy.env.workspace = rf"{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\DisMAP.gdb"
+##
+## arcpy.management.CreateFeatureclass(arcpy.env.workspace, "temp_fc", "POLYGON")
+## dataset = rf"{arcpy.env.workspace}\temp_fc"
+## dataset_xml = r'{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\May 1 2024\ArcGIS Metadata\temp_fc.xml'
+## new_md_xml = r'{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\May 1 2024\ArcGIS Metadata\new_md.xml'
+##
+## dataset_md = md.Metadata(dataset)
+## dataset_md.saveAsXML(dataset_xml.replace(".xml", " no sync.xml"))
+## pretty_format_xml_file(dataset_xml.replace(".xml", " no sync.xml"))
+##
+## new_md.saveAsXML(new_md_xml)
+## pretty_format_xml_file(new_md_xml)
+## #if not dataset_md.isReadOnly:
+## dataset_md.copy(new_md)
+## dataset_md.synchronize("ACCESSED")
+## dataset_md.save()
+## dataset_md.reload()
+## del new_md, new_md_xml
+##
+## dataset_md.saveAsXML(dataset_xml.replace(".xml", " new data.xml"))
+## pretty_format_xml_file(dataset_xml.replace(".xml", " new data.xml"))
+##
+## del dataset_md
+## del dataset, dataset_xml
+## del md
+
+## # Path to new metadata XML file
+## new_dataset_path = r'{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\May 1 2024\ArcGIS Metadata\new_dataset.xml'
+## poc_template_path = r'{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\May 1 2024\ArcGIS Metadata\poc_template.xml'
+##
+## # Create a new Metadata object, add some content to it, and then save
+## new_md = md.Metadata()
+## new_md.title = 'My Title'
+## new_md.tags = 'Tag1, Tag2'
+## new_md.summary = 'My Summary'
+## new_md.description = 'My Description'
+## new_md.credits = 'My Credits'
+## new_md.accessConstraints = 'My Access Constraints'
+## #new_md.saveAsXML(new_dataset_path, "TEMPLATE")
+## #dataset_md.saveAsXML(new_dataset_path)
+## #del dataset_md
+##
+## # Create the dataset metadata object
+## dataset_md = md.Metadata(dataset)
+## dataset_md.synchronize("ALWAYS")
+## dataset_md.save()
+## dataset_md.reload()
+## dataset_md.saveAsXML(dataset_xml)
+## if not dataset_md.isReadOnly:
+## dataset_md.copy(new_md)
+## dataset_md.synchronize("NOT_CREATED")
+## dataset_md.save()
+## dataset_md.reload()
+## dataset_md.synchronize("NOT_CREATED")
+## dataset_md.copy(new_md)
+## dataset_md.save()
+## dataset_md.reload()
+## # Create the POC metadata object
+## #poc_template_md = md.Metadata(poc_template_path)
+##
+## #Copy the POC metadata to the dataset metadata object
+## # Copy Start
+## #dataset_md.synchronize("ACCESSED", 0) # With copy and ACCESSED the POC overwrites all metadata content
+## #dataset_md.synchronize("ALWAYS") # With copy and ALWAYS the POC overwrites all metadata content
+## #dataset_md.synchronize("CREATED") # With copy and CREATED the POC overwrites all metadata content
+## #dataset_md.synchronize("NOT_CREATED") # With copy and NOT_CREATED the POC overwrites all metadata content
+## #dataset_md.synchronize("OVERWRITE") # With copy and OVERWRITE the POC overwrites all metadata content
+## #dataset_md.synchronize("SELECTIVE") # With copy and SELECTIVE the POC overwrites all metadata content
+## # Copy End
+## # Import Start
+## #dataset_md.synchronize("ACCESSED", 0) # With import and ACCESSED the POC is merged, but with only the XML flag
+## #dataset_md.synchronize("ALWAYS") # With import and ALWAYS the POC is merged, but with only the XML flag
+## #dataset_md.synchronize("CREATED") # With import and CREATED the POC is merged, but with only the XML flag
+## #dataset_md.synchronize("NOT_CREATED") # With import and NOT_CREATED the POC is merged, but with only the XML flag
+## #dataset_md.synchronize("OVERWRITE") # With import and OVERWRITE the POC is merged, but with only the XML flag
+## #dataset_md.synchronize("SELECTIVE") # With import and SELECTIVE the POC is merged, but with only the XML flag
+## # Import End
+## #if not dataset_md.isReadOnly:
+## # dataset_md.copy(poc_template_md)
+## #dataset_md.importMetadata(poc_template_md)
+## # Copy Start
+## #dataset_md.synchronize("ACCESSED", 0) # With copy and ACCESSED the POC overwrites all metadata content
+## #dataset_md.synchronize("ALWAYS") # With copy and ALWAYS the POC overwrites all metadata content
+## #dataset_md.synchronize("CREATED") # With copy and CREATED the POC overwrites all metadata content
+## #dataset_md.synchronize("NOT_CREATED") # With copy and NOT_CREATED the POC overwrites all metadata content
+## #dataset_md.synchronize("OVERWRITE") # With copy and OVERWRITE the POC overwrites all metadata content
+## #dataset_md.synchronize("SELECTIVE") # With copy and SELECTIVE the POC overwrites all metadata content
+## # Copy End
+## # Import Start
+## #dataset_md.synchronize("ACCESSED", 0) # With import and ACCESSED the POC is merged, but with only the XML flag
+## #dataset_md.synchronize("ALWAYS") # With import and ALWAYS the POC is merged, but with only the XML flag
+## #dataset_md.synchronize("CREATED") # With import and CREATED the POC is merged, but with only the XML flag
+## #dataset_md.synchronize("NOT_CREATED") # With import and NOT_CREATED the POC is merged, but with only the XML flag
+## #dataset_md.synchronize("OVERWRITE") # With import and OVERWRITE the POC is merged, but with only the XML flag
+## #dataset_md.synchronize("SELECTIVE") # With import and SELECTIVE the POC is merged, but with only the XML flag
+## # Import End
+## #dataset_md.save()
+## #dataset_md.reload()
+## #out_xml = new_dataset_path.replace(".xml", " copy do nothing .xml")
+## dataset_md.saveAsXML(new_dataset_path)
+## #dataset_md.saveAsXML(new_dataset_path.replace(".xml", " import SELECTIVE after.xml"))
+## del dataset_md, poc_template_path
+## #del poc_template_md
+##
+## pretty_format_xml_file(new_dataset_path)
+## pretty_format_xml_file(dataset_xml)
+## #del out_xml
+##
+## del new_dataset_path, dataset_xml
+## del md
+## del dataset, new_md
+##
+## # ###--->>>
+
+## # ###--->>> COMPARE TWO XML DOCUMENTS
+## # This requires use of the clone ArcGIS Pro arcpy.env.
+## # https://buildmedia.readthedocs.org/media/pdf/xmldiff/latest/xmldiff.pdf
+## table = "DisMAP_Regions"
+## project_folder = os.path.dirname(project_gdb)
+## export_metadata_folder = rf"{project_folder}\ArcGIS Metadata"
+##
+## in_xml = r"{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\May 1 2024\ArcGIS Metadata\DisMAP_Regions.xml"
+## out_xml = r"{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\May 1 2024\ArcGIS Metadata\DisMAP Regions Current.xml"
+##
+## diff = compare_metadata_xml(in_xml, out_xml)
+## if diff:
+## diff_metadata = rf"{export_metadata_folder}\{table} EXACT_COPY DIFF of Import.xml"
+## with open(diff_metadata, "w") as f:
+## f.write(diff)
+## del f
+## del diff_metadata
+## else:
+## pass
+##
+## del diff
+##
+## del in_xml, out_xml
+## del table, project_folder, export_metadata_folder
+## # ###--->>> COMPARE TWO XML DOCUMENTS
+
+ #pretty_format_xml_file(r"{os.environ['USERPROFILE']}\AppData\Local\ESRI\ArcGISPro\Staging\SharingProcesses\SharingMainLog - Copy.xml")
+ #pretty_format_xml_file(r"{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\July 1 2024\Export Metadata\EBS_IDW.crf.xml")
+
+## Doesn't really work
+## # ###--->>>
+## def xml_json_test(project_gdb, metadata_xml):
+##
+## from xml.dom import minidom
+## import json
+##
+## project_folder = os.path.dirname(project_gdb)
+## metadata_folder = rf"{project_folder}\ArcGIS Metadata"
+## export_metadata_folder = rf"{project_folder}\Export Metadata"
+## metadata_xml_path = rf"{metadata_folder}\{metadata_xml}"
+##
+## def parse_element(element):
+## dict_data = dict()
+## if element.nodeType == element.TEXT_NODE:
+## dict_data['data'] = element.data
+## if element.nodeType not in [element.TEXT_NODE, element.DOCUMENT_NODE,
+## element.DOCUMENT_TYPE_NODE]:
+## for item in element.attributes.items():
+## dict_data[item[0]] = item[1]
+## if element.nodeType not in [element.TEXT_NODE, element.DOCUMENT_TYPE_NODE]:
+## for child in element.childNodes:
+## child_name, child_dict = parse_element(child)
+## if child_name in dict_data:
+## try:
+## dict_data[child_name].append(child_dict)
+## except AttributeError:
+## dict_data[child_name] = [dict_data[child_name], child_dict]
+## else:
+## dict_data[child_name] = child_dict
+## return element.nodeName, dict_data
+##
+## #dom = minidom.parse('data.xml')
+## #f = open('data.json', 'w')
+## dom = minidom.parse(metadata_xml_path)
+## f = open(rf"{export_metadata_folder}\{metadata_xml.replace('.xml', '.json')}", "w")
+## f.write(json.dumps(parse_element(dom), sort_keys=True, indent=4))
+## f.close()
+##
+## del minidom, json
+##
+## metadata_xml = "AI_Sample_Locations_20230401.xml"
+##
+## xml_json_test(project_gdb, metadata_xml)
+##
+## del metadata_xml, xml_json_test
+##
+## # ###--->>>
+
+
+## # ###--->>>
+## from arcpy import metadata as md
+##
+## arcpy.env.overwriteOutput = True
+## arcpy.env.workspace = rf"{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\DisMAP.gdb"
+##
+## arcpy.management.CreateFeatureclass(arcpy.env.workspace, "temp_fc", "POLYGON")
+## dataset = rf"{arcpy.env.workspace}\temp_fc"
+## dataset_xml = r"{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\May 1 2024\Export Metadata\temp_fc.xml"
+## new_md_xml = r"{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\May 1 2024\Export Metadata\new_md.xml"
+##
+## poc_template = r"{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\May 1 2024\ArcGIS Metadata\poc_template.xml"
+##
+## # dataset_md = md.Metadata(dataset)
+## # dataset_md.saveAsXML(dataset_xml.replace(".xml", " no sync.xml"))
+## # pretty_format_xml_file(dataset_xml.replace(".xml", " no sync.xml"))
+## # dataset_md.synchronize("ALWAYS")
+## # dataset_md.save()
+## # dataset_md.reload()
+## # dataset_md.title = 'My Title'
+## # dataset_md.tags = 'Tag1, Tag2'
+## # dataset_md.summary = 'My Summary'
+## # dataset_md.description = 'My Description'
+## # dataset_md.credits = 'My Credits'
+## # dataset_md.accessConstraints = 'My Access Constraints'
+## # dataset_md.save()
+## # dataset_md.reload()
+## # dataset_md.saveAsXML(dataset_xml.replace(".xml", " ALWAYS sync.xml"))
+## # pretty_format_xml_file(dataset_xml.replace(".xml", " ALWAYS sync.xml"))
+##
+## #del dataset_md
+## #dataset_md = md.Metadata(dataset)
+##
+## # Create a new Metadata object, add some content to it, and then save
+## new_md = md.Metadata()
+## # new_md.title = 'My Title'
+## # new_md.tags = 'Tag1, Tag2'
+## # new_md.summary = 'My Summary'
+## # new_md.description = 'My Description'
+## # new_md.credits = 'My Credits'
+## # new_md.accessConstraints = 'My Access Constraints'
+## new_md.importMetadata(new_md_xml)
+## new_md.importMetadata(poc_template)
+## #new_md.save()
+## #new_md.reload()
+## new_md.saveAsXML(new_md_xml)
+## pretty_format_xml_file(new_md_xml)
+##
+## del new_md
+##
+## # dataset_md.synchronize("ACCESSED")
+## # dataset_md.importMetadata(new_md_xml)
+## # dataset_md.synchronize("CREATED")
+## # dataset_md.save()
+## # dataset_md.reload()
+## #
+## # del dataset_md
+## # dataset_md = md.Metadata(dataset)
+## #
+## # dataset_md.importMetadata(poc_template)
+## # #dataset_md.save()
+## # #dataset_md.reload()
+## # #dataset_md.synchronize("ALWAYS")
+## # #dataset_md.synchronize("SELECTIVE")
+## # dataset_md.save()
+## # dataset_md.reload()
+##
+##
+## del new_md_xml
+## del poc_template
+## #
+## # #dataset_md.saveAsXML(dataset_xml.replace(".xml", " new data.xml"))
+## # dataset_md.saveAsXML(dataset_xml.replace(".xml", " new data.xml"), "REMOVE_MACHINE_NAMES")
+## # pretty_format_xml_file(dataset_xml.replace(".xml", " new data.xml"))
+## #
+## # del dataset_md
+##
+## del dataset, dataset_xml
+## del md
+## # ###--->>>
+
+## # ###--->>>
+## from arcpy import metadata as md
+##
+## metadata_folder = r"{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\May 1 2024\Export Metadata"
+## new_md_xml = rf"{metadata_folder}\new_md.xml"
+## dataset_md_xml = rf"{metadata_folder}\dataset_md.xml"
+##
+## # Create a new Metadata object, add some content to it, and then save
+## new_md = md.Metadata()
+## new_md.title = 'My Title'
+## new_md.tags = 'Tag1, Tag2'
+## # new_md.summary = 'My Summary'
+## # new_md.description = 'My Description'
+## # new_md.credits = 'My Credits'
+## # new_md.accessConstraints = 'My Access Constraints'
+## new_md.saveAsXML(new_md_xml)
+## pretty_format_xml_file(new_md_xml)
+## new_md.saveAsXML(dataset_md_xml)
+## pretty_format_xml_file(dataset_md_xml)
+## del new_md
+##
+## dataset_md = md.Metadata(dataset_md_xml)
+## dataset_md.importMetadata("""20240701000000001.0FALSE""")
+## dataset_md.synchronize("NOT_CREATED")
+## dataset_md.save()
+## dataset_md.reload()
+## dataset_md.saveAsXML(dataset_md_xml)
+## pretty_format_xml_file(dataset_md_xml)
+## del dataset_md
+##
+## # Declared Variables
+## del dataset_md_xml
+## del new_md_xml
+## del metadata_folder
+##
+## # Imports
+## del md
+##
+## # ###--->>>
+
+## # ###--->>>
+## import datetime
+## import pytz
+##
+## #unaware = datetime.datetime(2011, 8, 15, 8, 15, 12, 0)
+## #aware = datetime.datetime(2011, 8, 15, 8, 15, 12, 0, pytz.UTC)
+##
+## unaware = datetime.datetime(2011, 1, 1, 0, 0, 0, 0)
+## aware = datetime.datetime(2011, 1, 1, 0, 0, 0, 0, pytz.UTC)
+##
+##
+## now_aware = pytz.utc.localize(unaware)
+## assert aware == now_aware
+##
+## arcpy.AddMessage(now_aware)
+##
+## from datetime import datetime, timezone
+##
+## dt = datetime(2011, 1, 1, 0, 0, 0, 0)
+## dt = dt.replace(tzinfo=timezone.utc)
+## arcpy.AddMessage(dt.isoformat())
+##
+## del datetime, timezone, dt
+##
+##
+## import pandas as pd
+## df = pd.DataFrame({"Year": [2014, 2015, 2016],})
+## #df.insert(df.columns.get_loc("Year")+1, "StdTime", pd.to_datetime(df["Year"], format="%Y").dt.tz_localize('Etc/GMT'))
+## df.insert(df.columns.get_loc("Year")+1, "StdTime", pd.to_datetime(df["Year"], format="%Y", utc=True))
+## #df.insert(df.columns.get_loc("Year")+1, "StdTime", pd.to_datetime(df["Year"], format="%Y"))
+##
+## arcpy.AddMessage(df)
+##
+## del pd, df
+## # ###--->>>
+
+ # Function parameters
+ del project_gdb
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+def test_bed_2(project=""):
+ try:
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ base_project_folder = os.path.dirname(os.path.dirname(__file__))
+ project_gdb = rf"{base_project_folder}\{project}\{project}.gdb"
+
+ del base_project_folder
+
+ # Test if passed workspace exists, if not raise SystemExit
+ if not arcpy.Exists(project_gdb):
+ raise SystemExit(f"{os.path.basename(project_gdb)} is missing!!")
+ else:
+ pass
+
+## # ###--->>>
+## from arcpy import metadata as md
+##
+## project_folder = os.path.dirname(project_gdb)
+##
+## metadata_folder = rf"{project_folder}\Export Metadata"
+## new_md_xml = rf"{metadata_folder}\new_md.xml"
+## dataset_md_xml = rf"{metadata_folder}\dataset_md.xml"
+##
+## # Create a new Metadata object, add some content to it, and then save
+## new_md = md.Metadata()
+## #new_md.title = 'My Title'
+## new_md.tags = 'Tag1, Tag2'
+## new_md.summary = 'My Summary'
+## new_md.description = 'My Description'
+## new_md.credits = 'My Credits'
+## new_md.accessConstraints = 'My Access Constraints'
+## # Save the original
+## new_md.saveAsXML(new_md_xml)
+##
+## # Pretty Format
+## pretty_format_xml_file(new_md_xml)
+##
+## # Save a copy to modify
+## new_md.saveAsXML(dataset_md_xml)
+##
+## # Pretty Format
+## pretty_format_xml_file(dataset_md_xml)
+## del new_md
+##
+## dataset_md = md.Metadata(dataset_md_xml)
+## #dataset_md.synchronize("ALWAYS")
+## #dataset_md.importMetadata("""20240701000000001.0FALSE""")
+## #dataset_md.importMetadata(md.Metadata("""My Title"""))
+##
+## #dataset_md.synchronize("SELECTIVE")
+## #dataset_md.importMetadata(rf"{metadata_folder}\AI_IDW_Sample_Locations.xml")
+## #dataset_md.synchronize("CREATED")
+##
+## #dataset_md.synchronize("CREATED")
+## #dataset_md.save()
+## #dataset_md.reload()
+## dataset_md.importMetadata(rf"{metadata_folder}\poc_template.xml")
+## dataset_md.save()
+## dataset_md.synchronize("SELECTIVE")
+## #arcpy.AddMessage(dataset_md.xml)
+## #dataset_md.save()
+## #dataset_md.reload()
+## # Save the modified file
+## #dataset_md.saveAsXML(dataset_md_xml)
+##
+## # Pretty Format
+## pretty_format_xml_file(dataset_md_xml)
+##
+## del dataset_md
+##
+## # Declared Variables
+## del dataset_md_xml
+## del new_md_xml
+## del metadata_folder, project_folder
+##
+## # Imports
+## del md
+##
+## # ###--->>>
+
+## # ###--->>>
+##
+## project_folder = os.path.dirname(project_gdb)
+## metadata_folder = rf"{project_folder}\Export Metadata"
+## xml_file_1 = rf"{metadata_folder}\AI_IDW_Sample_Locations.xml"
+## xml_file_2 = rf"{metadata_folder}\poc_template.xml"
+##
+## xml_combiner(project_gdb=project_gdb, xml_file_1=xml_file_1, xml_file_2=xml_file_2)
+##
+## del project_folder, metadata_folder
+## del xml_file_1, xml_file_2
+##
+## # ###--->>>
+
+## # ###--->>>
+##
+## from arcpy import metadata as md
+##
+## project_folder = os.path.dirname(project_gdb)
+## metadata_folder = rf"{project_folder}\Export Metadata"
+##
+## # ArcPy Environments
+## arcpy.env.overwriteOutput = True
+## arcpy.env.parallelProcessingFactor = "100%"
+## arcpy.env.workspace = project_gdb
+## arcpy.env.scratchWorkspace = rf"Scratch\\scratch.gdb"
+## arcpy.SetLogMetadata(True)
+##
+## metadata_dictionary = dataset_title_dict(project_gdb)
+##
+## dataset = os.path.join(project_gdb, "DisMAP_Regions")
+## table = os.path.basename(dataset)
+##
+## # #arcpy.conversion.FeaturesToJSON(
+## # # in_features = dataset,
+## # # out_json_file = rf"{metadata_folder}\{table}.json",
+## # # format_json = "FORMATTED",
+## # # include_z_values = "NO_Z_VALUES",
+## # # include_m_values = "NO_M_VALUES",
+## # # geoJSON = "NO_GEOJSON",
+## # # outputToWGS84 = "KEEP_INPUT_SR",
+## # # use_field_alias = "USE_FIELD_NAME"
+## # # )
+
+
+## arcpy.AddMessage(f"JSON To Features")
+##
+## arcpy.conversion.JSONToFeatures(
+## in_json_file = rf"{metadata_folder}\{table}.json",
+## out_features = dataset,
+## geometry_type = "POLYLINE"
+## )
+
+## arcpy.AddMessage(f"Dataset: {table}")
+##
+## dataset_md = md.Metadata(dataset)
+## #dataset_md.xml = '20240701000000001.0FALSE'
+## dataset_md.synchronize("ALWAYS")
+## out_xml = rf"{metadata_folder}\{table}_Step_1.xml"
+## dataset_md.saveAsXML(out_xml)
+##
+## pretty_format_xml_file(out_xml)
+## del out_xml
+##
+## dataset_md.importMetadata(rf"{metadata_folder}\poc_john.xml")
+## dataset_md.synchronize("SELECTIVE")
+## dataset_md.save()
+## dataset_md.reload()
+## out_xml = rf"{metadata_folder}\{table}_Step_2.xml"
+## dataset_md.saveAsXML(out_xml)
+##
+## pretty_format_xml_file(out_xml)
+## del out_xml
+##
+## dataset_md.title = metadata_dictionary[table]["Dataset Service Title"]
+## dataset_md.tags = metadata_dictionary[table]["Tags"]
+## dataset_md.summary = metadata_dictionary[table]["Summary"]
+## dataset_md.description = metadata_dictionary[table]["Description"]
+## dataset_md.credits = metadata_dictionary[table]["Credits"]
+## dataset_md.accessConstraints = metadata_dictionary[table]["Access Constraints"]
+## dataset_md.save()
+## dataset_md.reload()
+## out_xml = rf"{metadata_folder}\{table}_Step_3.xml"
+## dataset_md.saveAsXML(out_xml)
+##
+## pretty_format_xml_file(out_xml)
+## del out_xml
+##
+## del dataset_md
+
+
+## # arcpy.AddMessage(f"\tDataset Service: {datasets_dict[dataset]['Dataset Service']}")
+## # arcpy.AddMessage(f"\tDataset Service Title: {datasets_dict[dataset]['Dataset Service Title']}")
+##
+## # # https://pro.arcgis.com/en/pro-app/latest/arcpy/metadata/metadata-class.htm
+## # dataset_md = md.Metadata(dataset)
+## # dataset_md.xml = ''
+## # dataset_md.save()
+## # #dataset_md.synchronize("ALWAYS")
+## # dataset_md.save()
+## # del dataset_md
+## #
+## # dataset_md = md.Metadata(dataset)
+## # dataset_md.importMetadata(rf"{metadata_folder}\poc_template.xml")
+## # dataset_md.save()
+## # del dataset_md
+## #
+## # dataset_md = md.Metadata(dataset)
+## # dataset_md.synchronize("ALWAYS")
+## # dataset_md.importMetadata(rf"{metadata_folder}\dismap_regions_entity.xml")
+## # dataset_md.save()
+## # del dataset_md
+##
+## dataset_md = md.Metadata(dataset)
+## dataset_md.synchronize("ALWAYS")
+## dataset_md.save()
+## dataset_md.reload()
+## #dataset_md.synchronize("SELECTIVE")
+## dataset_md.title = metadata_dictionary[table]["Dataset Service Title"]
+## dataset_md.tags = metadata_dictionary[table]["Tags"]
+## dataset_md.summary = metadata_dictionary[table]["Summary"]
+## dataset_md.description = metadata_dictionary[table]["Description"]
+## dataset_md.credits = metadata_dictionary[table]["Credits"]
+## dataset_md.accessConstraints = metadata_dictionary[table]["Access Constraints"]
+## dataset_md.save()
+## del dataset_md
+##
+## dataset_md = md.Metadata(dataset)
+##
+## #out_xml = rf"{metadata_folder}\{dataset_md.title}.xml"
+## out_xml = rf"{metadata_folder}\{table}.xml"
+## dataset_md.saveAsXML(out_xml, "REMOVE_ALL_SENSITIVE_INFO")
+## #dataset_md.saveAsXML(out_xml, "REMOVE_MACHINE_NAMES")
+## #dataset_md.saveAsXML(out_xml)
+##
+## pretty_format_xml_file(out_xml)
+## del out_xml
+##
+## del dataset_md
+##
+## # arcpy.AddMessage(f"Dataset: {table}")
+## # dataset_md_path = rf"{metadata_folder}\{table}.xml"
+## # if arcpy.Exists(dataset_md_path):
+## # arcpy.AddMessage(f"\tMetadata File: {os.path.basename(dataset_md_path)}")
+## # from arcpy import metadata as md
+## # try:
+## # dataset_md = md.Metadata(dataset)
+## # # Import the standard-format metadata content to the target item
+## # if not dataset_md.isReadOnly:
+## # dataset_md.importMetadata(dataset_md_path, "ARCGIS_METADATA")
+## # dataset_md.save()
+## # dataset_md.reload()
+## # dataset_md.title = title
+## # dataset_md.save()
+## # dataset_md.reload()
+## #
+## # arcpy.AddMessage(f"\tExporting metadata file from {table}")
+## #
+## # out_xml = rf"{project_folder}\Export Metadata\{title} EXACT_COPY.xml"
+## # dataset_md.saveAsXML(out_xml, "EXACT_COPY")
+## #
+## # pretty_format_xml_file(out_xml)
+## # del out_xml
+## #
+## # del dataset_md, md
+## #
+## # except: # noqa: E722
+## # arcpy.AddError(f"\tDataset metadata import error!! {arcpy.GetMessages()}")
+## # else:
+## # arcpy.AddWarning(f"\tDataset missing metadata file!!")
+
+## del md
+## del project_folder, metadata_folder, metadata_dictionary
+## del dataset, table
+##
+## # ###--->>>
+
+
+## # ###--->>>
+## from arcpy import metadata as md
+##
+## base_project_folder = os.path.dirname(os.path.dirname(__file__))
+## project_gdb = rf"{base_project_folder}\{project}\{project}.gdb"
+## workspace = rf"{base_project_folder}\DisMAP.gdb"
+##
+## arcpy.env.overwriteOutput = True
+## arcpy.env.workspace = workspace
+##
+## arcpy.management.CreateFeatureclass(arcpy.env.workspace, "temp_fc", "POLYGON")
+##
+## dataset = rf"{workspace}\temp_fc"
+## dataset_xml = rf"{base_project_folder}\{project}\Export Metadata\temp_fc.xml"
+##
+## dataset_md = md.Metadata(dataset)
+## dataset_md.saveAsXML(dataset_xml.replace(".xml", " no sync.xml"), "REMOVE_ALL_SENSITIVE_INFO")
+## pretty_format_xml_file(dataset_xml.replace(".xml", " no sync.xml"))
+##
+## dataset_md.synchronize("ALWAYS")
+## dataset_md.save()
+##
+## dataset_md.saveAsXML(dataset_xml.replace(".xml", " ALWAYS.xml"), "REMOVE_ALL_SENSITIVE_INFO")
+## pretty_format_xml_file(dataset_xml.replace(".xml", " ALWAYS.xml"))
+##
+## dataset_md.title = 'My Title'
+## dataset_md.tags = 'Tag1, Tag2'
+## dataset_md.summary = 'My Summary'
+## dataset_md.description = 'My Description'
+## dataset_md.credits = 'My Credits'
+## dataset_md.accessConstraints = 'My Access Constraints'
+## dataset_md.save()
+## dataset_md.saveAsXML(dataset_xml.replace(".xml", " Title added.xml"), "REMOVE_ALL_SENSITIVE_INFO")
+## pretty_format_xml_file(dataset_xml.replace(".xml", " Title added.xml"))
+##
+## del dataset, dataset_xml
+## del dataset_md
+## del md
+## del base_project_folder, workspace
+## # ###--->>>
+
+ # Function parameters
+ del project_gdb
+ del project
+
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ traceback.print_exc()
+ sys.exit()
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+    except Exception:
+        arcpy.AddError(arcpy.GetMessages(2))
+        traceback.print_exc()
+        sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+            if rk:
+                arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+                del rk
+ return True
+ finally:
+ pass
+
+def script_tool(project_gdb=""):
+ try:
+ from time import gmtime, localtime, strftime, time
+        # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ project_folder = os.path.dirname(project_gdb)
+
+ # Test if passed workspace exists, if not raise SystemExit
+        if not arcpy.Exists(project_gdb):
+            raise SystemExit(f"{os.path.basename(project_gdb)} is missing!!")
+        else:
+            pass
+
+ # ###--->>> dataset_title_dict Test #1
+ DatasetTitleDict = False
+ if DatasetTitleDict:
+ md_dict = dataset_title_dict(project_gdb)
+ for key in sorted(md_dict):
+ arcpy.AddMessage(key)
+ #if "_CRF" in key:
+ arcpy.AddMessage(f"\tDataset Service Title: {md_dict[key]['Dataset Service Title']}")
+ arcpy.AddMessage(f"\tDataset Service: {md_dict[key]['Dataset Service']}")
+ arcpy.AddMessage(f"\tTags: {md_dict[key]['Tags']}")
+ del key
+ del md_dict
+ else:
+ pass
+ del DatasetTitleDict
+ # ###--->>>
+
+ # ###--->>> Test table_definitions
+ TestTableDefinitions = False
+ if TestTableDefinitions:
+ arcpy.AddMessage(os.path.basename(project_gdb))
+ from dev_create_table_definitions_json import get_list_of_table_fields
+ get_list_of_table_fields(project_gdb)
+ del get_list_of_table_fields
+ csv_data_folder = os.path.join(project_folder, "CSV_Data")
+ # First Test
+ _table_definitions = table_definitions(csv_data_folder, "HI_IDW")
+ arcpy.AddMessage(_table_definitions)
+ # Second Test
+ _table_definitions = table_definitions(csv_data_folder, "")
+ #arcpy.AddMessage(_table_definitions)
+ for table in _table_definitions:
+ arcpy.AddMessage(f"Table: {table}")
+## #arcpy.AddMessage(f"\t{_table_definitions[table]}");
+## for field in _table_definitions[table]:
+## arcpy.AddMessage(f"\tfield: {field}")
+## _field_definitions = field_definitions(csv_data_folder, field)
+## #arcpy.AddMessage(_field_definitions)
+ ## for field in field_definitions:
+ ## arcpy.AddMessage(field)
+ ## #arcpy.AddMessage(f"\t{field_definitions[field]}")
+ ## del field
+ ## else:
+ ## arcpy.AddMessage(f"Field: {field} is not in field_definitions")
+## del _field_definitions, field
+
+
+ del table
+ del _table_definitions
+ del csv_data_folder
+ else:
+ pass
+ del TestTableDefinitions
+ # ###--->>>
+
+ TestImportMetadata = True
+ if TestImportMetadata:
+ csv_data_folder = os.path.join(project_folder, "CSV_Data")
+ #table_name = "Datasets"
+ #table_name = "Species_Filter"
+ #table_name = "DisMAP_Survey_Info"
+ #table_name = "HI_IDW_Mosaic"
+ #table_name = "HI_IDW_Fishnet_Bathymetry"
+ table_name = "Indicators"
+
+ try:
+
+ import_metadata(csv_data_folder, dataset=rf"{project_gdb}\{table_name}")
+ #import_metadata(csv_data_folder, dataset=rf"{project_folder}\Scratch\HI_IDW.gdb\{table_name}")
+
+            except: # noqa: E722
+                traceback.print_exc()
+
+ del table_name, csv_data_folder
+ else:
+ pass
+ del TestImportMetadata
+
+ # Declared variables
+ del project_folder
+ # Function parameters
+ del project_gdb
+
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}\nCompleted: {strftime('%a %b %d %I:%M %p', localtime())}")
+ arcpy.AddMessage(u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time))))
+ arcpy.AddMessage(f"{'-' * 80}")
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+ except KeyboardInterrupt:
+ sys.exit()
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ traceback.print_exc()
+ sys.exit()
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+        if rk:
+            arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+            del rk
+ return True
+ finally:
+ pass
+
+if __name__ == "__main__":
+ try:
+
+ project_gdb = arcpy.GetParameterAsText(0)
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+
+ arcpy.AddMessage(f"Running Python script: {os.path.basename(__file__)}")
+
+ script_tool(project_gdb)
+
+ arcpy.SetParameterAsText(1, "Result")
+
+ del project_gdb
+
+ except SystemExit:
+ pass
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ else:
+ pass
+ finally:
+ sys.exit()
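The `else:` branches above report any local names that were not explicitly released with `del` before the function returns. A minimal standalone sketch of that leak check (the function and variable names here are illustrative, not part of the scripts):

```python
import inspect

def remaining_keys():
    """Return the non-dunder local names still bound in the caller's frame."""
    caller = inspect.stack()[1].frame
    return sorted(k for k in caller.f_locals if not k.startswith("__"))

def demo_tool(value=0):
    # 'result' and 'value' are still alive here, so the check reports them
    result = value * 2
    leftovers = remaining_keys()
    del result, value
    return leftovers

# demo_tool(3) -> ['result', 'value']
```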
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/dismap_version_project_setup.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/dismap_version_project_setup.py
new file mode 100644
index 0000000..505c732
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/dismap_version_project_setup.py
@@ -0,0 +1,154 @@
+"""
+Script documentation
+- Tool parameters are accessed using arcpy.GetParameter() or
+ arcpy.GetParameterAsText()
+- Update derived parameter values using arcpy.SetParameter() or
+ arcpy.SetParameterAsText()
+"""
+import os
+import arcpy
+import traceback
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = sys.path[0] + os.sep + "test.py"
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def script_tool(base_project_folder="", new_project_folder="", project_folders=""):
+ """Script code goes below"""
+ try:
+ arcpy.env.overwriteOutput = True
+ try:
+ aprx = arcpy.mp.ArcGISProject("CURRENT")
+ except: # noqa: E722
+ aprx = arcpy.mp.ArcGISProject(rf"{base_project_folder}\DisMAP.aprx")
+
+ aprx.save()
+ home_folder = aprx.homeFolder
+        if not arcpy.Exists(rf"{home_folder}\{new_project_folder}"):
+            arcpy.AddMessage(f"Creating Project Folder: '{new_project_folder}'")
+            arcpy.management.CreateFolder(home_folder, new_project_folder)
+            arcpy.AddMessage(arcpy.GetMessages())
+        else:
+            arcpy.AddMessage(f"Project Folder: '{new_project_folder}' Exists")
+        if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\{new_project_folder}.gdb"):
+            arcpy.AddMessage(f"Creating Project GDB: '{new_project_folder}.gdb'")
+            arcpy.management.CreateFileGDB(rf"{home_folder}\{new_project_folder}", f"{new_project_folder}")
+            arcpy.AddMessage(arcpy.GetMessages())
+        else:
+            arcpy.AddMessage(f"Project GDB: {new_project_folder}.gdb exists")
+ if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\Scratch"):
+ arcpy.AddMessage("Creating the Scratch Folder")
+ arcpy.management.CreateFolder(rf"{home_folder}\{new_project_folder}", "Scratch")
+ arcpy.AddMessage(arcpy.GetMessages())
+ else:
+            arcpy.AddMessage(f"Scratch Folder in '{new_project_folder}' Exists")
+ if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\Scratch\scratch.gdb"):
+ arcpy.AddMessage("Creating the Scratch GDB")
+ arcpy.management.CreateFileGDB(rf"{home_folder}\{new_project_folder}\Scratch", "scratch")
+ arcpy.AddMessage(arcpy.GetMessages())
+ else:
+ arcpy.AddMessage("Scratch GDB Exists")
+ for _project_folder in project_folders.split(";"):
+ if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\{_project_folder}"):
+ arcpy.AddMessage(f"Creating Folder: {_project_folder}")
+ arcpy.management.CreateFolder(rf"{home_folder}\{new_project_folder}", _project_folder)
+ arcpy.AddMessage(arcpy.GetMessages())
+ else:
+ arcpy.AddMessage(f"Folder: '{_project_folder}' Exists")
+ del _project_folder
+ if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\{new_project_folder}.aprx"):
+ aprx.saveACopy(rf"{home_folder}\{new_project_folder}\{new_project_folder}.aprx")
+ arcpy.AddMessage(arcpy.GetMessages())
+ else:
+ pass
+
+ _aprx = arcpy.mp.ArcGISProject(rf"{home_folder}\{new_project_folder}\{new_project_folder}.aprx")
+ # Remove maps
+ _maps = _aprx.listMaps()
+ if len(_maps) > 0:
+ for _map in _maps:
+ arcpy.AddMessage(_map.name)
+                _aprx.deleteItem(_map)
+ del _map
+ del _maps
+ _aprx.save()
+
+ databases = []
+ databases.append({"databasePath": rf"{home_folder}\{new_project_folder}\{new_project_folder}.gdb", "isDefaultDatabase": True})
+ _aprx.updateDatabases(databases)
+ arcpy.AddMessage(f"Databases: {databases}")
+ del databases
+ _aprx.save()
+
+ toolboxes = []
+ toolboxes.append({"toolboxPath": rf"{home_folder}\DisMAP.atbx", "isDefaultToolbox": True})
+ _aprx.updateToolboxes(toolboxes)
+ arcpy.AddMessage(f"Toolboxes: {toolboxes}")
+ del toolboxes
+ _aprx.save()
+ del _aprx
+
+ # Declared variables
+ del home_folder, aprx
+ # Function parameters
+ del new_project_folder, project_folders
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ #raise SystemExit
+ except SystemExit:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ #raise SystemExit
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ #raise SystemExit
+    except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ #raise SystemExit
+ else:
+ pass
+ return True
+ finally:
+ pass
+if __name__ == "__main__":
+ try:
+ base_project_folder = arcpy.GetParameterAsText(0)
+ new_project_folder = arcpy.GetParameterAsText(1)
+ project_folders = arcpy.GetParameterAsText(2)
+
+ if not base_project_folder:
+ base_project_folder = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python"
+ else:
+ pass
+
+ if not new_project_folder:
+ new_project_folder = "February 1 2026"
+ else:
+ pass
+
+ if not project_folders:
+ project_folders = "CRFs;CSV_Data;Dataset_Shapefiles;Images;Layers;Metadata_Export;Publish"
+ else:
+ pass
+
+ script_tool(base_project_folder, new_project_folder, project_folders)
+
+ arcpy.SetParameterAsText(3, "Result")
+
+ del base_project_folder, new_project_folder, project_folders
+
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
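`script_tool()` above wraps every folder and GDB it builds in a create-if-missing guard, so re-running the setup tool is harmless. Outside of arcpy, the same idempotent pattern can be sketched with `pathlib` (the folder names below are illustrative):

```python
from pathlib import Path
import tempfile

def ensure_project_tree(home, project, project_folders):
    """Create <home>/<project>, a Scratch folder, and the semicolon-separated
    subfolders, skipping anything that already exists (like arcpy.Exists)."""
    created = []
    root = Path(home) / project
    for folder in [root, root / "Scratch", *(root / f for f in project_folders.split(";"))]:
        if not folder.exists():
            folder.mkdir(parents=True)
            created.append(folder.name)
    return created

with tempfile.TemporaryDirectory() as tmp:
    first = ensure_project_tree(tmp, "February 1 2026", "CRFs;CSV_Data")
    second = ensure_project_tree(tmp, "February 1 2026", "CRFs;CSV_Data")
# first -> ['February 1 2026', 'Scratch', 'CRFs', 'CSV_Data'], second -> []
```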
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/import_datasets_species_filter_csv_data.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/import_datasets_species_filter_csv_data.py
new file mode 100644
index 0000000..417385c
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/import_datasets_species_filter_csv_data.py
@@ -0,0 +1,468 @@
+"""
+Script documentation
+- Tool parameters are accessed using arcpy.GetParameter() or
+ arcpy.GetParameterAsText()
+- Update derived parameter values using arcpy.SetParameter() or
+ arcpy.SetParameterAsText()
+"""
+import os
+import sys # built-ins first
+import traceback
+import inspect
+import arcpy
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = sys.path[0] + os.sep + "test.py"
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def get_encoding_index_col(csv_file):
+ try:
+ # Imports
+ import chardet
+ import pandas as pd
+ # Open the file in binary mode
+ with open(csv_file, 'rb') as f:
+ # Read the file's content
+ data = f.read()
+ # Detect the encoding using chardet.detect()
+ encoding_result = chardet.detect(data)
+ # Retrieve the encoding information
+ __encoding = encoding_result['encoding']
+ del f, data, encoding_result
+ # arcpy.AddMessage the detected encoding
+ #arcpy.AddMessage("Detected Encoding:", __encoding)
+ dtypes = {}
+ # Read the CSV file into a DataFrame
+ df = pd.read_csv(csv_file, encoding = __encoding, delimiter = ",",)
+ # Analyze the data types and lengths
+ for column in df.columns:
+ dtypes[column] = df[column].dtype
+ del column
+ first_column = list(dtypes.keys())[0]
+ __index_column = 0 if first_column == "Unnamed: 0" else None
+ # Declared Variables
+ del df, dtypes, first_column
+ # Import
+ del chardet, pd
+ # Function Parameter
+ del csv_file
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ raise SystemExit
+ except Exception:
+ traceback.print_exc()
+ arcpy.AddError(arcpy.GetMessages(2))
+ raise SystemExit
+ except: # noqa: E722
+ traceback.print_exc()
+ arcpy.AddError(arcpy.GetMessages(2))
+ raise SystemExit
+ else:
+ return __encoding, __index_column
+ finally:
+ pass
+
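`get_encoding_index_col()` decides whether a CSV carries a pandas-written index column by checking for the `'Unnamed: 0'` header that pandas assigns to a blank first field. The same sniff can be done with the standard-library `csv` module (the sample rows are illustrative):

```python
import csv
import io

def sniff_index_col(csv_text):
    """Return 0 when the first header field is blank (pandas writes its index
    that way and reads it back as 'Unnamed: 0'), otherwise None."""
    header = next(csv.reader(io.StringIO(csv_text)))
    return 0 if header and header[0] == "" else None

# sniff_index_col(",Species,Count\n0,cod,12\n") -> 0
# sniff_index_col("Species,Count\ncod,12\n")    -> None
```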
+def worker(project_gdb="", csv_file=""):
+ try:
+ # Test if passed workspace exists, if not raise SystemExit
+ if not arcpy.Exists(project_gdb) or not arcpy.Exists(csv_file):
+ raise SystemExit(f"{os.path.basename(project_gdb)} OR {os.path.basename(csv_file)} is missing!!")
+ # Imports
+ from arcpy import metadata as md
+ import dismap_tools
+        # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+        # Set basic workspace variables
+ table_name = os.path.basename(csv_file).replace(".csv", "")
+ csv_data_folder = os.path.dirname(csv_file)
+ project_folder = os.path.dirname(csv_data_folder)
+ scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+        # Set basic arcpy.env variables
+        arcpy.env.workspace = project_gdb
+        arcpy.env.scratchWorkspace = scratch_workspace
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ #arcpy.AddMessage(table_name)
+ #arcpy.AddMessage(csv_data_folder)
+ field_csv_dtypes = dismap_tools.dTypesCSV(csv_data_folder, table_name)
+ field_gdb_dtypes = dismap_tools.dTypesGDB(csv_data_folder, table_name)
+ #arcpy.AddMessage(field_csv_dtypes)
+ #arcpy.AddMessage(field_gdb_dtypes)
+ arcpy.AddMessage(f"\tCreating Table: {table_name}")
+ arcpy.management.CreateTable(project_gdb, f"{table_name}", "", "", table_name.replace("_", " "))
+ arcpy.AddMessage("\t{0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ import pandas as pd
+ import numpy as np
+ import warnings
+ arcpy.AddMessage(f"> Importing {table_name} CSV Table")
+ #csv_table = f"{table_name}.csv"
+ # https://pandas.pydata.org/pandas-docs/stable/getting_started/intro_tutorials/09_timeseries.html?highlight=datetime
+ # https://www.tutorialsandyou.com/python/numpy-data-types-66.html
+ #df = pd.read_csv('my_file.tsv', sep='\t', header=0) ## not setting the index_col
+ #df.set_index(['0'], inplace=True)
+ # C:\. . .\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\numpy\lib\arraysetops.py:583:
+ # FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
+ # mask |= (ar1 == a)
+ # A fix: https://www.youtube.com/watch?v=TTeElATMpoI
+        # TLDR: pandas are Jedi; numpy are the Hutts; and python is the galactic empire
+ #encoding, index_column = dismap_tools.get_encoding_index_col(csv_file)
+ encoding, index_column = get_encoding_index_col(csv_file)
+ with warnings.catch_warnings():
+ warnings.simplefilter(action='ignore', category=FutureWarning)
+ # DataFrame
+ df = pd.read_csv(
+ csv_file,
+ index_col = index_column,
+ encoding = encoding,
+ delimiter = ",",
+ dtype = field_csv_dtypes,
+ )
+ del encoding, index_column
+ #arcpy.AddMessage(field_csv_dtypes)
+ #arcpy.AddMessage(field_gdb_dtypes)
+ del field_csv_dtypes
+ #arcpy.AddMessage(df)
+ # Replace NaN with an empty string. When pandas reads a cell
+            # with missing data, it assigns that cell a Null or NaN
+            # value, so we change that value to an empty string of ''.
+ # https://community.esri.com/t5/python-blog/those-pesky-null-things/ba-p/902664
+ # https://community.esri.com/t5/python-blog/numpy-snippets-6-much-ado-about-nothing-nan-stuff/ba-p/893702
+ df.fillna('', inplace=True)
+ #df.fillna(np.nan)
+ #df = df.replace({np.nan: None})
+ # Alternatively, apply to all columns at once
+ df = df.apply(lambda x: x.str.strip() if x.dtype == "object" else x)
+ arcpy.AddMessage(f">-> Creating the {table_name} Geodatabase Table")
+ try:
+ array = np.array(np.rec.fromrecords(df.values), dtype = field_gdb_dtypes)
+ except: # noqa: E722
+ traceback.print_exc()
+ raise SystemExit
+ del df
+ del field_gdb_dtypes
+ # Temporary table
+ tmp_table = rf"memory\{table_name.lower()}_tmp"
+ try:
+ arcpy.da.NumPyArrayToTable(array, tmp_table)
+ del array
+ # Captures ArcPy type of error
+ except: # noqa: E722
+ traceback.print_exc()
+ raise SystemExit
+ arcpy.AddMessage(f">-> Copying the {table_name} Table from memory to the GDB")
+ fields = [f.name for f in arcpy.ListFields(tmp_table) if f.type == "String"]
+ for field in fields:
+ arcpy.management.CalculateField(tmp_table, field=field, expression=f"'' if !{field}! is None else !{field}!")
+ arcpy.AddMessage("Calculate Field:\t{0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del field
+ del fields
+ dataset_path = rf"{project_gdb}\{table_name}"
+ arcpy.management.CopyRows(tmp_table, dataset_path, "")
+ arcpy.AddMessage("Copy Rows:\t{0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # Remove the temporary table
+ arcpy.management.Delete(tmp_table)
+ del tmp_table
+ #arcpy.conversion.ExportTable(in_table = dataset_path, out_table = rf"{csv_data_folder}\_{table_name}.csv", where_clause="", use_field_alias_as_name = "NOT_USE_ALIAS")
+ #arcpy.AddMessage("Export Table:\t{0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # Alter Fields
+ dismap_tools.alter_fields(csv_data_folder, dataset_path)
+ dismap_tools.import_metadata(csv_data_folder=csv_data_folder,dataset=dataset_path)
+ # Load Metadata
+ #dataset_md = md.Metadata(dataset_path)
+ #dataset_md.synchronize("ALWAYS")
+ #dataset_md.save()
+ #del dataset_md
+ arcpy.AddMessage(f"Compacting the {os.path.basename(project_gdb)} GDB")
+ arcpy.management.Compact(project_gdb)
+ arcpy.AddMessage("\t"+arcpy.GetMessages().replace("\n", "\n\t"))
+ # Basic variables
+ del dataset_path
+ del table_name, csv_data_folder, project_folder, scratch_workspace
+ # Imports
+ del dismap_tools, md, pd, np, warnings
+ # Function parameters
+ del project_gdb, csv_file
+ except KeyboardInterrupt:
+ raise SystemExit
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ raise SystemExit
+ except SystemExit:
+ sys.exit()
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ raise SystemExit
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ raise SystemExit
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
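`worker()` above moves the DataFrame into the geodatabase by way of a NumPy structured array (`np.rec.fromrecords` feeding `arcpy.da.NumPyArrayToTable`). The core dtype step can be sketched without arcpy; the field names and widths below are illustrative stand-ins for what `dismap_tools.dTypesGDB()` returns:

```python
import numpy as np

# Rows as they would come out of df.values / df.itertuples
records = [("cod", 12), ("haddock", 7)]
# Stand-in for the GDB dtype mapping (25-char text field, 32-bit integer)
gdb_dtypes = [("Species", "U25"), ("Count", "<i4")]

array = np.array(records, dtype=gdb_dtypes)
# arcpy.da.NumPyArrayToTable(array, r"memory\tmp") would consume this array
# array["Species"] -> ['cod', 'haddock'], array["Count"].sum() -> 19
```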
+def update_datecode(csv_file="", project_name=""):
+ try:
+ #sys.path.append(os.path.abspath('../dev'))
+ # Imports
+ import dismap_tools
+ import pandas as pd
+ import warnings
+        # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+ table_name = os.path.basename(csv_file).replace(".csv", "")
+ csv_data_folder = os.path.dirname(csv_file)
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ field_csv_dtypes = dismap_tools.dTypesCSV(csv_data_folder, table_name)
+ arcpy.AddMessage(f"\tUpdating CSV file: {os.path.basename(csv_file)}")
+ #arcpy.AddMessage(f"\t\t{csv_file}")
+ # C:\. . .\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\numpy\lib\arraysetops.py:583:
+ # FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
+ # mask |= (ar1 == a)
+ # A fix: https://www.youtube.com/watch?v=TTeElATMpoI
+        # TLDR: pandas are Jedi; numpy are the Hutts; and python is the galactic empire
+ with warnings.catch_warnings():
+ warnings.simplefilter(action='ignore', category=FutureWarning)
+ # DataFrame
+ df = pd.read_csv(csv_file,
+ index_col = 0,
+ encoding = "utf-8",
+ delimiter = ',',
+ dtype = field_csv_dtypes,
+ )
+ old_date_code = df.DateCode.unique()[0]
+ arcpy.AddMessage(f"\tOld Date Code: {old_date_code}")
+ arcpy.AddMessage(f"\tNew Date Code: {dismap_tools.date_code(project_name)}")
+ df = df.replace(regex = old_date_code, value = dismap_tools.date_code(project_name))
+ df.to_csv(path_or_buf = f"{csv_file}", sep = ',')
+ del df, pd, warnings
+ del old_date_code
+ arcpy.AddMessage(f"\tCompleted updating CSV file: {os.path.basename(csv_file)}")
+ # Declared Variables
+ del field_csv_dtypes, table_name, csv_data_folder
+ # Imports
+ del dismap_tools
+ # Function parameters
+ del csv_file, project_name
+ except KeyboardInterrupt:
+ raise SystemExit
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ raise SystemExit
+ except SystemExit:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ raise SystemExit
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ raise SystemExit
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ raise SystemExit
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
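`update_datecode()` above swaps the old date code for the new one across every cell with `df.replace(regex=..., value=...)`. The same substitution over raw CSV text can be sketched with the standard library (the date codes shown are examples):

```python
import csv
import io

def replace_datecode(csv_text, old_code, new_code):
    """Replace every occurrence of old_code in every cell, mirroring
    df.replace(regex=old, value=new) in update_datecode()."""
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    for row in csv.reader(io.StringIO(csv_text)):
        writer.writerow([cell.replace(old_code, new_code) for cell in row])
    return out.getvalue()
```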
+def script_tool(project_folder=""):
+ """Script code goes below"""
+ try:
+ from lxml import etree
+ from arcpy import metadata as md
+ from io import StringIO
+ import dismap_tools
+ from time import gmtime, localtime, strftime, time
+        # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+ # Imports
+ #from dev_import_datasets_species_filter_csv_data import worker
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ #project_folder = rf"{os.path.dirname(project_gdb)}"
+ project_name = rf"{os.path.basename(project_folder)}"
+ project_gdb = rf"{project_folder}\{project_name}.gdb"
+ home_folder = rf"{os.path.dirname(project_folder)}"
+ csv_data_folder = rf"{project_folder}\CSV_Data"
+ datasets_csv = rf"{csv_data_folder}\Datasets.csv"
+ species_filter_csv = rf"{csv_data_folder}\Species_Filter.csv"
+ survey_metadata_csv = rf"{csv_data_folder}\DisMAP_Survey_Info.csv"
+ SpeciesPersistenceIndicatorTrend = rf"{csv_data_folder}\SpeciesPersistenceIndicatorTrend.csv"
+ SpeciesPersistenceIndicatorPercentileBin = rf"{csv_data_folder}\SpeciesPersistenceIndicatorPercentileBin.csv"
+ arcpy.management.Copy(rf"{home_folder}\Initial Data\Datasets_{dismap_tools.date_code(project_name)}.csv", datasets_csv)
+ arcpy.management.Copy(rf"{home_folder}\Initial Data\Species_Filter_{dismap_tools.date_code(project_name)}.csv", species_filter_csv)
+ arcpy.management.Copy(rf"{home_folder}\Initial Data\DisMAP_Survey_Info_{dismap_tools.date_code(project_name)}.csv", survey_metadata_csv)
+ arcpy.management.Copy(rf"{home_folder}\Initial Data\SpeciesPersistenceIndicatorTrend_{dismap_tools.date_code(project_name)}.csv", SpeciesPersistenceIndicatorTrend)
+ arcpy.management.Copy(rf"{home_folder}\Initial Data\SpeciesPersistenceIndicatorPercentileBin_{dismap_tools.date_code(project_name)}.csv", SpeciesPersistenceIndicatorPercentileBin)
+ import json
+ json_path = rf"{csv_data_folder}\root_dict.json"
+ with open(json_path, "r") as json_file:
+ root_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del json
+ contacts = rf"{home_folder}\Datasets\DisMAP Contacts 2026 02 01.xml"
+ datasets = [datasets_csv, species_filter_csv, survey_metadata_csv, SpeciesPersistenceIndicatorTrend, SpeciesPersistenceIndicatorPercentileBin]
+ for dataset in datasets:
+ arcpy.AddMessage(rf"Metadata for: {os.path.basename(dataset)}")
+ dataset_md = md.Metadata(dataset)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.importMetadata(contacts, "ARCGIS_METADATA")
+ dataset_md.save()
+ dataset_md.synchronize("OVERWRITE")
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ target_tree = etree.parse(StringIO(dataset_md.xml), parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root = target_tree.getroot()
+            target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag])
+ new_item_name = target_root.find("Esri/DataProperties/itemProps/itemName").text
+ #arcpy.AddMessage(new_item_name)
+ etree.indent(target_root, space=' ')
+ dataset_md.xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ #arcpy.AddMessage(dataset_md.xml)
+ del dataset_md
+ del dataset
+ del datasets
+ del csv_data_folder
+ #
+ UpdateDatecode = False
+ if UpdateDatecode:
+ # Update DateCode
+ #arcpy.AddMessage(datasets_csv)
+ arcpy.AddMessage(project_name)
+ update_datecode(csv_file=datasets_csv, project_name=project_name)
+ del UpdateDatecode
+ #
+ DatasetsCSVFile = True
+ if DatasetsCSVFile:
+ worker(project_gdb=project_gdb, csv_file=datasets_csv)
+ del DatasetsCSVFile
+ #
+ SpeciesFilterCSVFile = False
+ if SpeciesFilterCSVFile:
+ worker(project_gdb=project_gdb, csv_file=species_filter_csv)
+ del SpeciesFilterCSVFile
+ #
+ DisMAPSurveyInfoFile = False
+ if DisMAPSurveyInfoFile:
+ worker(project_gdb=project_gdb, csv_file=survey_metadata_csv)
+ del DisMAPSurveyInfoFile
+ #
+ SpeciesPersistenceIndicatorPercentileBinFile = False
+ if SpeciesPersistenceIndicatorPercentileBinFile:
+ worker(project_gdb=project_gdb, csv_file=SpeciesPersistenceIndicatorPercentileBin)
+ del SpeciesPersistenceIndicatorPercentileBinFile
+ #
+ SpeciesPersistenceIndicatorTrendFile = False
+ if SpeciesPersistenceIndicatorTrendFile:
+ worker(project_gdb=project_gdb, csv_file=SpeciesPersistenceIndicatorTrend)
+ del SpeciesPersistenceIndicatorTrendFile
+ # # # # # #
+            # Declared Variables
+ del SpeciesPersistenceIndicatorPercentileBin, SpeciesPersistenceIndicatorTrend
+ del datasets_csv, species_filter_csv, survey_metadata_csv, home_folder, project_name
+ # Declared Variables
+ del contacts, target_tree, target_root, new_item_name, root_dict
+ # Imports
+ del etree, md, StringIO, dismap_tools
+ # Function Parameters
+ del project_folder
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}\nCompleted: {strftime('%a %b %d %I:%M %p', localtime())}")
+        arcpy.AddMessage(f"Elapsed Time {strftime('%H:%M:%S', gmtime(elapse_time))} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+
+ project_folder = arcpy.GetParameterAsText(0)
+ if not project_folder:
+ project_folder = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026")
+
+ script_tool(project_folder)
+
+ arcpy.SetParameterAsText(1, "Result")
+
+ del project_folder
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/publish_to_portal_director.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/publish_to_portal_director.py
new file mode 100644
index 0000000..c534b04
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/publish_to_portal_director.py
@@ -0,0 +1,1324 @@
+# -*- coding: utf-8 -*-
+# -------------------------------------------------------------------------------
+# Name: publish_to_portal_director
+# Purpose: Publish DisMAP feature class and table layers to an ArcGIS portal
+#
+# Author: john.f.kennedy
+#
+# Created: 03/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+# -------------------------------------------------------------------------------
+import os
+import sys
+import traceback
+
+import arcpy # third-parties second
+
+def trace():
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ #filename = sys.path[0] + os.sep + f"{os.path.basename(__file__)}"
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def feature_sharing_draft_report(sd_draft=""):
+ try:
+ import xml.dom.minidom as DOM
+
+ docs = DOM.parse(sd_draft)
+ key_list = docs.getElementsByTagName("Key")
+ value_list = docs.getElementsByTagName("Value")
+
+ for i in range(key_list.length):
+ value = f"Value: {value_list[i].firstChild.nodeValue}" if value_list[i].firstChild else "Value is missing"
+
+ arcpy.AddMessage(f"\t\tKey: {key_list[i].firstChild.nodeValue:<45} {value}")
+ # arcpy.AddMessage(f"\t\tKey: {key_list[i].firstChild.nodeValue:<45} {value[:50]}")
+ del i, value
+
+ del DOM, key_list, value_list, docs
+ del sd_draft
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def create_feature_class_layers(project_gdb=""):
+ try:
+ # Import
+ from arcpy import metadata as md
+ from dismap_tools import dataset_title_dict, parse_xml_file_format_and_save, clear_folder
+
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(project_gdb):
+            sys.exit(f"{os.path.basename(project_gdb)} is missing!!")
+
+        # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+        # Set basic workspace variables
+ project_folder = os.path.dirname(project_gdb)
+ project_name = os.path.basename(project_folder)
+ csv_data_folder = rf"{project_folder}\CSV_Data"
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+
+ # Clear Scratch Folder
+ clear_folder(folder=scratch_folder)
+
+ # Create Scratch Workspace for Project
+        if not arcpy.Exists(scratch_folder):
+            os.makedirs(scratch_folder)
+        if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+            arcpy.management.CreateFileGDB(scratch_folder, "scratch")
+
+        # Set basic workspace variables
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ aprx = arcpy.mp.ArcGISProject(rf"{project_folder}\{project_name}.aprx")
+
+ del scratch_folder, scratch_workspace
+
+ arcpy.AddMessage("Loading the Dataset Title Dictionary. Please wait")
+ datasets_dict = dataset_title_dict(project_gdb)
+
+ datasets = []
+
+ #datasets.extend(arcpy.ListFeatureClasses("AI_IDW_Sample_Locations"))
+ datasets.extend(arcpy.ListFeatureClasses("*Sample_Locations"))
+ datasets.extend(arcpy.ListFeatureClasses("DisMAP_Regions"))
+ datasets.extend(arcpy.ListTables("Indicators"))
+ datasets.extend(arcpy.ListTables("Species_Filter"))
+ datasets.extend(arcpy.ListTables("DisMAP_Survey_Info"))
+ datasets.extend(arcpy.ListTables("SpeciesPersistenceIndicatorPercentileBin"))
+ datasets.extend(arcpy.ListTables("SpeciesPersistenceIndicatorTrend"))
+
+ for dataset in sorted(datasets):
+
+ feature_service_title = datasets_dict[dataset]["Dataset Service Title"]
+
+ arcpy.AddMessage(f"Dataset: {dataset}")
+ arcpy.AddMessage(f"\tTitle: {feature_service_title}")
+
+ desc = arcpy.da.Describe(dataset)
+
+ feature_class_path = rf"{project_gdb}\{dataset}"
+
+ if desc["dataType"] == "FeatureClass":
+
+ arcpy.AddMessage("\tMake Feature Layer")
+ feature_class_layer = arcpy.management.MakeFeatureLayer(feature_class_path, feature_service_title)
+ feature_class_layer_file = rf"{project_folder}\Layers\{feature_class_layer}.lyrx"
+
+ arcpy.AddMessage("\tSave Layer File")
+ _result = arcpy.management.SaveToLayerFile(
+ in_layer = feature_class_layer,
+ out_layer = feature_class_layer_file,
+ is_relative_path = "RELATIVE",
+ version = "CURRENT"
+ )
+ del _result
+
+ arcpy.management.Delete(feature_class_layer)
+ del feature_class_layer
+
+ elif desc["dataType"] == "Table":
+
+ arcpy.AddMessage("\tMake Table View")
+ feature_class_layer = arcpy.management.MakeTableView(
+ in_table = feature_class_path,
+ out_view = feature_service_title,
+ where_clause = "",
+ workspace = project_gdb,
+ field_info = "OBJECTID OBJECTID VISIBLE NONE;DatasetCode DatasetCode VISIBLE NONE;Region Region VISIBLE NONE;Season Season VISIBLE NONE;DateCode DateCode VISIBLE NONE;Species Species VISIBLE NONE;CommonName CommonName VISIBLE NONE;CoreSpecies CoreSpecies VISIBLE NONE;Year Year VISIBLE NONE;DistributionProjectName DistributionProjectName VISIBLE NONE;DistributionProjectCode DistributionProjectCode VISIBLE NONE;SummaryProduct SummaryProduct VISIBLE NONE;CenterOfGravityLatitude CenterOfGravityLatitude VISIBLE NONE;MinimumLatitude MinimumLatitude VISIBLE NONE;MaximumLatitude MaximumLatitude VISIBLE NONE;OffsetLatitude OffsetLatitude VISIBLE NONE;CenterOfGravityLatitudeSE CenterOfGravityLatitudeSE VISIBLE NONE;CenterOfGravityLongitude CenterOfGravityLongitude VISIBLE NONE;MinimumLongitude MinimumLongitude VISIBLE NONE;MaximumLongitude MaximumLongitude VISIBLE NONE;OffsetLongitude OffsetLongitude VISIBLE NONE;CenterOfGravityLongitudeSE CenterOfGravityLongitudeSE VISIBLE NONE;CenterOfGravityDepth CenterOfGravityDepth VISIBLE NONE;MinimumDepth MinimumDepth VISIBLE NONE;MaximumDepth MaximumDepth VISIBLE NONE;OffsetDepth OffsetDepth VISIBLE NONE;CenterOfGravityDepthSE CenterOfGravityDepthSE VISIBLE NONE"
+ )
+ feature_class_layer_file = rf"{project_folder}\Layers\{feature_class_layer}.lyrx"
+
+ arcpy.AddMessage("\tSave Layer File")
+ _result = arcpy.management.SaveToLayerFile(
+ in_layer = feature_class_layer,
+ out_layer = feature_class_layer_file,
+ is_relative_path = "RELATIVE",
+ version = "CURRENT"
+ )
+ del _result
+
+ arcpy.management.Delete(feature_class_layer)
+ del feature_class_layer
+
+ else:
+ pass
+
+ if [f.name for f in arcpy.ListFields(feature_class_path) if f.name == "StdTime"]:
+ arcpy.AddMessage("\tSet Time Enabled if time field is in dataset")
+ # Get time information from a layer in a layer file
+ layer_file = arcpy.mp.LayerFile(feature_class_layer_file)
+ layer = layer_file.listLayers()[0]
+ layer.enableTime("StdTime", "StdTime", True)
+ layer.time.timeZone = arcpy.mp.ListTimeZones("(UTC) Coordinated Universal Time")[0]
+ layer_file.save()
+ del layer
+
+ for layer in layer_file.listLayers():
+ if layer.supports("TIME"):
+ if layer.isTimeEnabled:
+ lyrTime = layer.time
+ startTime = lyrTime.startTime
+ endTime = lyrTime.endTime
+ timeDelta = endTime - startTime
+ startTimeField = lyrTime.startTimeField
+ endTimeField = lyrTime.endTimeField
+ arcpy.AddMessage(f"\tLayer: {layer.name}")
+ arcpy.AddMessage(f"\t\tStart Time Field: {startTimeField}")
+ arcpy.AddMessage(f"\t\tEnd Time Field: {endTimeField}")
+ arcpy.AddMessage(f"\t\tStart Time: {str(startTime.strftime('%m-%d-%Y'))}")
+ arcpy.AddMessage(f"\t\tEnd Time: {str(endTime.strftime('%m-%d-%Y'))}")
+ arcpy.AddMessage(f"\t\tTime Extent: {str(timeDelta.days)} days")
+ arcpy.AddMessage(f"\t\tTime Zone: {str(layer.time.timeZone)}")
+ del lyrTime, startTime, endTime, timeDelta
+ del startTimeField, endTimeField
+ else:
+ arcpy.AddMessage("No time properties have been set on the layer")
+ else:
+ arcpy.AddMessage("Time is not supported on this layer")
+ del layer
+ del layer_file
+ else:
+ arcpy.AddMessage("\tDataset does not have a time field")
+
+ layer_file = arcpy.mp.LayerFile(feature_class_layer_file)
+
+ # aprx.listBasemaps() to get a list of available basemaps
+ #
+ # ['Charted Territory Map',
+ # 'Colored Pencil Map',
+ # 'Community Map',
+ # 'Dark Gray Canvas',
+ # 'Firefly Imagery Hybrid',
+ # 'GEBCO Basemap (NOAA NCEI Visualization)',
+ # 'GEBCO Basemap/Contours (NOAA NCEI Visualization)',
+ # 'GEBCO Gray Basemap (NOAA NCEI Visualization)',
+ # 'GEBCO Gray Basemap/Contours (NOAA NCEI Visualization)',
+ # 'Human Geography Dark Map',
+ # 'Human Geography Map',
+ # 'Imagery',
+ # 'Imagery Hybrid',
+ # 'Light Gray Canvas',
+ # 'Mid-Century Map',
+ # 'Modern Antique Map',
+ # 'National Geographic Style Map',
+ # 'Navigation',
+ # 'Navigation (Dark)',
+ # 'Newspaper Map',
+ # 'NOAA Charts',
+ # 'NOAA ENC® Charts',
+ # 'Nova Map',
+ # 'Oceans',
+ # 'OpenStreetMap',
+ # 'Streets',
+ # 'Streets (Night)',
+ # 'Terrain with Labels',
+ # 'Topographic']
+
+ if aprx.listMaps(feature_service_title):
+ aprx.deleteItem(aprx.listMaps(feature_service_title)[0])
+ aprx.save()
+
+ arcpy.AddMessage(f"\tCreating Map: {feature_service_title}")
+ aprx.createMap(f"{feature_service_title}", "Map")
+ aprx.save()
+
+ current_map = aprx.listMaps(feature_service_title)[0]
+
+ basemap = "Terrain with Labels"
+ current_map.addLayer(layer_file)
+ current_map.addBasemap(basemap)
+ aprx.save()
+ del basemap
+
+ arcpy.AddMessage("\t\tCreate map thumbnail and update metadata")
+ current_map_view = current_map.defaultView
+ current_map_view.exportToPNG(
+ rf"{project_folder}\Layers\{feature_service_title}.png",
+ width=288,
+ height=192,
+ resolution=96,
+ color_mode="24-BIT_TRUE_COLOR",
+ embed_color_profile=True,
+ )
+ del current_map_view
+
+ fc_md = md.Metadata(feature_class_path)
+ fc_md.title = feature_service_title
+ fc_md.thumbnailUri = (rf"{project_folder}\Layers\{feature_service_title}.png")
+ fc_md.save()
+ fc_md.reload()
+ fc_md.saveAsXML(rf"{project_folder}\Metadata_Export\{feature_service_title}.xml")
+ del fc_md
+
+ parse_xml_file_format_and_save(csv_data_folder=csv_data_folder, xml_file=rf"{project_folder}\Metadata_Export\{feature_service_title}.xml", sort=True)
+ #parse_xml_file_format_and_save(csv_data_folder=csv_data_folder, xml_file="", sort=True)
+
+ in_md = md.Metadata(feature_class_path)
+ layer_file.metadata.copy(in_md)
+ layer_file.metadata.save()
+ layer_file.save()
+ current_map.metadata.copy(in_md)
+ current_map.metadata.save()
+ aprx.save()
+ del in_md
+
+ arcpy.AddMessage(f"\t\tLayer File Path: {layer_file.filePath}")
+ arcpy.AddMessage(f"\t\tLayer File Version: {layer_file.version}")
+ arcpy.AddMessage("\t\tLayer File Metadata:")
+ arcpy.AddMessage(f"\t\t\tLayer File Title: {layer_file.metadata.title}")
+ #arcpy.AddMessage(f"\t\t\tLayer File Tags: {layer_file.metadata.tags}")
+ #arcpy.AddMessage(f"\t\t\tLayer File Summary: {layer_file.metadata.summary}")
+ #arcpy.AddMessage(f"\t\t\tLayer File Description: {layer_file.metadata.description}")
+ #arcpy.AddMessage(f"\t\t\tLayer File Credits: {layer_file.metadata.credits}")
+ #arcpy.AddMessage(f"\t\t\tLayer File Access Constraints: {layer_file.metadata.accessConstraints}")
+
+ arcpy.AddMessage("\t\tList of layers or tables in Layer File:")
+ if current_map.listLayers(feature_service_title):
+ layer = current_map.listLayers(feature_service_title)[0]
+ elif current_map.listTables(feature_service_title):
+ layer = current_map.listTables(feature_service_title)[0]
+            else:
+                arcpy.AddWarning(f"Something went wrong: {feature_service_title} was not found in the map")
+                continue
+
+ in_md = md.Metadata(feature_class_path)
+ layer.metadata.copy(in_md)
+ layer.metadata.save()
+ layer_file.save()
+ aprx.save()
+ del in_md
+
+ arcpy.AddMessage(f"\t\t\tLayer Name: {layer.name}")
+ arcpy.AddMessage("\t\t\tLayer Metadata:")
+ arcpy.AddMessage(f"\t\t\t\tLayer Title: {layer.metadata.title}")
+ #arcpy.AddMessage(f"\t\t\t\tLayer Tags: {layer.metadata.tags}")
+ #arcpy.AddMessage(f"\t\t\t\tLayer Summary: {layer.metadata.summary}")
+ #arcpy.AddMessage(f"\t\t\t\tLayer Description: {layer.metadata.description}")
+ #arcpy.AddMessage(f"\t\t\t\tLayer Credits: {layer.metadata.credits}")
+ #arcpy.AddMessage(f"\t\t\t\tLayer Access Constraints: {layer.metadata.accessConstraints}")
+ del layer
+ del layer_file
+ del feature_class_layer_file
+ del feature_class_path
+
+ aprx.deleteItem(current_map)
+ del current_map
+ aprx.save()
+
+ #del dataset_code, point_feature_type, feature_class_name, region, season
+ #del date_code, distribution_project_code
+ #del feature_class_path
+
+ del desc
+ del feature_service_title
+ del dataset
+
+ del datasets_dict
+ del datasets
+
+ # Declared Variables set in function
+ del aprx
+ del csv_data_folder, project_folder, project_name
+
+ # Imports
+ del dataset_title_dict, parse_xml_file_format_and_save, md
+
+ # Function Parameters
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def create_feature_class_services(project_gdb=""):
+ try:
+ # Import
+ from arcpy import metadata as md
+ from dismap_tools import dataset_title_dict
+
+ # Test if passed workspace exists, if not sys.exit()
+ if not arcpy.Exists(project_gdb):
+            sys.exit(f"{os.path.basename(project_gdb)} is missing!!")
+
+        # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+        # Set basic workspace variables
+ project_folder = os.path.dirname(project_gdb)
+ project_name = os.path.basename(project_folder)
+ csv_data_folder = rf"{project_folder}\CSV_Data"
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+
+ # Create Scratch Workspace for Project
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ if not arcpy.Exists(scratch_folder):
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", "scratch")
+
+        # Set basic workspace variables
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ aprx = arcpy.mp.ArcGISProject(rf"{project_folder}\{project_name}.aprx")
+
+ del scratch_folder, scratch_workspace
+
+ arcpy.AddMessage("Loading the Dataset Title Dictionary. Please wait")
+ datasets_dict = dataset_title_dict(project_gdb)
+
+ datasets = []
+
+ #datasets.extend(arcpy.ListFeatureClasses("AI_IDW_Sample_Locations"))
+ datasets.extend(arcpy.ListFeatureClasses("*Sample_Locations"))
+ datasets.extend(arcpy.ListFeatureClasses("DisMAP_Regions"))
+ datasets.extend(arcpy.ListTables("Indicators"))
+ datasets.extend(arcpy.ListTables("Species_Filter"))
+ datasets.extend(arcpy.ListTables("DisMAP_Survey_Info"))
+ datasets.extend(arcpy.ListTables("SpeciesPersistenceIndicatorPercentileBin"))
+ datasets.extend(arcpy.ListTables("SpeciesPersistenceIndicatorTrend"))
+
+ #LogInAGOL = False
+ #if LogInAGOL:
+ #try:
+ #portal = "https://noaa.maps.arcgis.com/"
+ #user = "John.F.Kennedy_noaa"
+ # Sign in to portal
+ # arcpy.SignInToPortal("https://www.arcgis.com", "MyUserName", "MyPassword")
+ # For example: 'http://www.arcgis.com/'
+ #arcpy.SignInToPortal(portal)
+
+ #arcpy.AddMessage(f"###---> Signed into Portal: {arcpy.GetActivePortalURL()} <---###")
+ #del portal, user
+ #except:
+                #arcpy.AddError("###---> Sign in to Portal failed <---###")
+ #del LogInAGOL
+
+ for dataset in sorted(datasets):
+
+ feature_service = datasets_dict[dataset]["Dataset Service"]
+ feature_service_title = datasets_dict[dataset]["Dataset Service Title"]
+
+ arcpy.AddMessage(f"Dataset: {dataset}")
+ arcpy.AddMessage(f"\tFS: {feature_service}")
+ arcpy.AddMessage(f"\tFST: {feature_service_title}")
+
+ feature_class_layer_file = rf"{project_folder}\Layers\{feature_service_title}.lyrx"
+
+ layer_file = arcpy.mp.LayerFile(feature_class_layer_file)
+
+ del feature_class_layer_file
+
+ # aprx.listBasemaps() to get a list of available basemaps
+ #
+ # ['Charted Territory Map',
+ # 'Colored Pencil Map',
+ # 'Community Map',
+ # 'Dark Gray Canvas',
+ # 'Firefly Imagery Hybrid',
+ # 'GEBCO Basemap (NOAA NCEI Visualization)',
+ # 'GEBCO Basemap/Contours (NOAA NCEI Visualization)',
+ # 'GEBCO Gray Basemap (NOAA NCEI Visualization)',
+ # 'GEBCO Gray Basemap/Contours (NOAA NCEI Visualization)',
+ # 'Human Geography Dark Map',
+ # 'Human Geography Map',
+ # 'Imagery',
+ # 'Imagery Hybrid',
+ # 'Light Gray Canvas',
+ # 'Mid-Century Map',
+ # 'Modern Antique Map',
+ # 'National Geographic Style Map',
+ # 'Navigation',
+ # 'Navigation (Dark)',
+ # 'Newspaper Map',
+ # 'NOAA Charts',
+ # 'NOAA ENC® Charts',
+ # 'Nova Map',
+ # 'Oceans',
+ # 'OpenStreetMap',
+ # 'Streets',
+ # 'Streets (Night)',
+ # 'Terrain with Labels',
+ # 'Topographic']
+
+ if aprx.listMaps(feature_service_title):
+ aprx.deleteItem(aprx.listMaps(feature_service_title)[0])
+ aprx.save()
+
+ arcpy.AddMessage(f"\tCreating Map: {feature_service_title}")
+ aprx.createMap(feature_service_title, "Map")
+ aprx.save()
+
+ current_map = aprx.listMaps(feature_service_title)[0]
+
+ in_md = md.Metadata(rf"{project_gdb}\{dataset}")
+ current_map.metadata.copy(in_md)
+ current_map.metadata.save()
+ aprx.save()
+ del in_md
+
+ current_map.addLayer(layer_file)
+ aprx.save()
+
+ del layer_file
+
+ arcpy.AddMessage("\t\tList of layers or tables in Layer File:")
+ if current_map.listLayers(feature_service_title):
+ lyr = current_map.listLayers(feature_service_title)[0]
+ elif current_map.listTables(feature_service_title):
+ lyr = current_map.listTables(feature_service_title)[0]
+            else:
+                arcpy.AddWarning(f"Something went wrong: {feature_service_title} was not found in the map")
+                continue
+
+ in_md = md.Metadata(rf"{project_gdb}\{dataset}")
+ lyr.metadata.copy(in_md)
+ lyr.metadata.save()
+ aprx.save()
+ del in_md
+
+ arcpy.AddMessage("\tGet Web Layer Sharing Draft")
+ # Get Web Layer Sharing Draft
+ server_type = "HOSTING_SERVER" # FEDERATED_SERVER
+ # m.getWebLayerSharingDraft (server_type, service_type, service_name, {layers_and_tables})
+ # sddraft = m.getWebLayerSharingDraft(server_type, "FEATURE", service_name, [selected_layer, selected_table])
+ # https://pro.arcgis.com/en/pro-app/latest/arcpy/sharing/featuresharingdraft-class.htm#GUID-8E27A3ED-A705-4ACF-8C7D-AA861327AD26
+ sddraft = current_map.getWebLayerSharingDraft(server_type=server_type, service_type="FEATURE", service_name=feature_service, layers_and_tables=lyr)
+ del server_type
+
+ sddraft.allowExporting = False
+ sddraft.offline = False
+ sddraft.offlineTarget = None
+ sddraft.credits = lyr.metadata.credits
+ sddraft.description = lyr.metadata.description
+ sddraft.summary = lyr.metadata.summary
+ sddraft.tags = lyr.metadata.tags
+ sddraft.useLimitations = lyr.metadata.accessConstraints
+ sddraft.overwriteExistingService = True
+ sddraft.portalFolder = f"DisMAP {project_name}"
+
+ del lyr
+
+ arcpy.AddMessage(f"\t\tAllow Exporting: {sddraft.allowExporting}")
+ arcpy.AddMessage(f"\t\tCheck Unique ID Assignment: {sddraft.checkUniqueIDAssignment}")
+ arcpy.AddMessage(f"\t\tOffline: {sddraft.offline}")
+ arcpy.AddMessage(f"\t\tOffline Target: {sddraft.offlineTarget}")
+ arcpy.AddMessage(f"\t\tOverwrite Existing Service: {sddraft.overwriteExistingService}")
+ arcpy.AddMessage(f"\t\tPortal Folder: {sddraft.portalFolder}")
+ arcpy.AddMessage(f"\t\tServer Type: {sddraft.serverType}")
+ arcpy.AddMessage(f"\t\tService Name: {sddraft.serviceName}")
+ #arcpy.AddMessage(f"\t\tCredits: {sddraft.credits}")
+ #arcpy.AddMessage(f"\t\tDescription: {sddraft.description}")
+ #arcpy.AddMessage(f"\t\tSummary: {sddraft.summary}")
+ #arcpy.AddMessage(f"\t\tTags: {sddraft.tags}")
+ #arcpy.AddMessage(f"\t\tUse Limitations: {sddraft.useLimitations}")
+
+ arcpy.AddMessage("\tExport to SD Draft")
+ # Create Service Definition Draft file
+ sddraft.exportToSDDraft(rf"{project_folder}\Publish\{feature_service}.sddraft")
+
+ del sddraft
+
+ sd_draft = rf"{project_folder}\Publish\{feature_service}.sddraft"
+
+ arcpy.AddMessage("\tModify SD Draft")
+ # https://pro.arcgis.com/en/pro-app/latest/arcpy/sharing/featuresharingdraft-class.htm
+ # https://www.esri.com/arcgis-blog/products/arcgis-pro/mapping/streamline-your-code-with-new-properties-in-arcpy-sharing
+ import xml.dom.minidom as DOM
+
+ docs = DOM.parse(sd_draft)
+ key_list = docs.getElementsByTagName("Key")
+ value_list = docs.getElementsByTagName("Value")
+
+ for i in range(key_list.length):
+                if key_list[i].firstChild.nodeValue == "maxRecordCount":
+                    arcpy.AddMessage(f"\t\tUpdating maxRecordCount from {value_list[i].firstChild.nodeValue} to 10000")
+                    value_list[i].firstChild.nodeValue = 10000
+ if key_list[i].firstChild.nodeValue == "ServiceTitle":
+ arcpy.AddMessage(f"\t\tUpdating ServiceTitle from {value_list[i].firstChild.nodeValue} to {feature_service_title}")
+ value_list[i].firstChild.nodeValue = feature_service_title
+ # Doesn't work
+ #if key_list[i].firstChild.nodeValue == "GeodataServiceName":
+ # arcpy.AddMessage(f"\t\tUpdating GeodataServiceName from {value_list[i].firstChild.nodeValue} to {feature_service}")
+ # value_list[i].firstChild.nodeValue = feature_service
+ del i
+
+ # Write to the .sddraft file
+            with open(sd_draft, "w") as f:
+                docs.writexml(f)
+            del f
+
+ del DOM, docs, key_list, value_list
+
+ FeatureSharingDraftReport = True
+ if FeatureSharingDraftReport:
+ arcpy.AddMessage(f"\tReport for {os.path.basename(sd_draft)} SD File")
+ feature_sharing_draft_report(sd_draft)
+ del FeatureSharingDraftReport
+
+ arcpy.AddMessage(f"\tCreate/Stage {os.path.basename(sd_draft)} SD File")
+ arcpy.server.StageService(in_service_definition_draft=sd_draft, out_service_definition=sd_draft.replace("sddraft", "sd"), staging_version=5)
+
+ UploadServiceDefinition = True
+ if UploadServiceDefinition:
+ #if project != "April 1 2023":
+ arcpy.AddMessage(f"\tUpload {os.path.basename(sd_draft).replace('sddraft', 'sd')} Service Definition")
+ arcpy.server.UploadServiceDefinition(
+ in_sd_file = sd_draft.replace("sddraft", "sd"),
+ in_server = "HOSTING_SERVER", # in_service_name = "", #in_cluster = "",
+ in_folder_type = "FROM_SERVICE_DEFINITION", # EXISTING #in_folder = "",
+ in_startupType = "STARTED",
+ in_override = "OVERRIDE_DEFINITION",
+ in_my_contents = "NO_SHARE_ONLINE",
+ in_public = "PRIVATE",
+ in_organization = "NO_SHARE_ORGANIZATION", # in_groups = ""
+ )
+ #else:
+ # arcpy.AddWarning(f"Project is {project}")
+ del UploadServiceDefinition
+
+ del sd_draft
+
+ #aprx.deleteItem(current_map)
+ del current_map
+ aprx.save()
+
+ del feature_service, feature_service_title
+ del dataset
+ del datasets
+ del datasets_dict
+
+ # TODO: Possibly create a dictionary that can be saved to JSON
+
+ aprx.save()
+
+ current_maps = aprx.listMaps()
+
+ if current_maps:
+ arcpy.AddMessage("\nCurrent Maps\n")
+ for current_map in current_maps:
+ arcpy.AddMessage(f"\tProject Map: {current_map.name}")
+ del current_map
+ else:
+ arcpy.AddWarning("No maps in Project")
+
+ del current_maps
+
+ # Declared Variables set in function for aprx
+
+ # Save aprx one more time and then delete
+ aprx.save()
+ del aprx
+
+ # Declared Variables set in function
+ del project_folder, project_name, csv_data_folder
+
+ # Imports
+ del dataset_title_dict, md
+
+ # Function Parameters
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+##def update_metadata_from_published_md(project_gdb=""):
+## try:
+## # Import
+## import dismap_tools
+##
+## arcpy.env.overwriteOutput = True
+## arcpy.env.parallelProcessingFactor = "100%"
+## arcpy.SetLogMetadata(True)
+## arcpy.SetSeverityLevel(2)
+## arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+##
+## LogInAGOL = False
+## if LogInAGOL:
+## try:
+## portal = "https://noaa.maps.arcgis.com/"
+## user = "John.F.Kennedy_noaa"
+##
+## # Sign in to portal
+## #arcpy.SignInToPortal("https://www.arcgis.com", "MyUserName", "MyPassword")
+## # For example: 'http://www.arcgis.com/'
+## arcpy.SignInToPortal(portal)
+##
+## arcpy.AddMessage(f"###---> Signed into Portal: {arcpy.GetActivePortalURL()} <---###")
+## del portal, user
+## except:
+##                arcpy.AddError("###---> Sign in to Portal failed <---###")
+## del LogInAGOL
+##
+## aprx = arcpy.mp.ArcGISProject(base_project_file)
+## home_folder = aprx.homeFolder
+## del aprx
+##
+## project_gdb = rf"{project_folder}\{project}.gdb"
+##
+##
+##
+## # DatasetCode, CSVFile, TransformUnit, TableName, GeographicArea, CellSize,
+## # PointFeatureType, FeatureClassName, Region, Season, DateCode, Status,
+## # DistributionProjectCode, DistributionProjectName, SummaryProduct,
+## # FilterRegion, FilterSubRegion, FeatureServiceName, FeatureServiceTitle,
+## # MosaicName, MosaicTitle, ImageServiceName, ImageServiceTitle
+##
+## # Get values for table_name from Datasets table
+## #fields = ["FeatureClassName", "FeatureServiceName", "FeatureServiceTitle"]
+## fields = ["DatasetCode", "PointFeatureType", "FeatureClassName", "Region", "Season", "DateCode", "DistributionProjectCode"]
+## datasets = [row for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"), fields, where_clause = f"FeatureClassName IS NOT NULL AND DistributionProjectCode NOT IN ('GLMME', 'GFDL')")]
+## #datasets = [row for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"), fields, where_clause = f"FeatureClassName IN ('AI_IDW_Sample_Locations', 'DisMAP_Regions')")]
+## del fields
+##
+## for dataset in datasets:
+## dataset_code, point_feature_type, feature_class_name, region_latitude, season, date_code, distribution_project_code = dataset
+##
+## feature_service_name = f"{dataset_code}_{point_feature_type}_{date_code}".replace("None", "").replace(" ", "_").replace("__", "_")
+##
+## if distribution_project_code == "IDW":
+## feature_service_title = f"{region_latitude} {season} {point_feature_type} {date_code}".replace("None", "").replace(" ", " ")
+## #elif distribution_project_code in ["GLMME", "GFDL"]:
+## # feature_service_title = f"{region_latitude} {distribution_project_code} {point_feature_type} {date_code}".replace("None", "").replace(" ", " ")
+## else:
+## feature_service_title = f"{feature_service_name}".replace("_", " ")
+##
+## map_title = feature_service_title.replace("GRID Points", "").replace("Sample Locations", "").replace(" ", " ")
+##
+## feature_class_path = f"{project_gdb}\{feature_class_name}"
+##
+## arcpy.AddMessage(f"Dataset Code: {dataset_code}")
+## arcpy.AddMessage(f"\tFeature Service Name: {feature_service_name}")
+## arcpy.AddMessage(f"\tFeature Service Title: {feature_service_title}")
+## arcpy.AddMessage(f"\tMap Title: {map_title}")
+## arcpy.AddMessage(f"\tLayer Title: {feature_service_title}")
+## arcpy.AddMessage(f"\tFeature Class Name: {feature_class_name}")
+## arcpy.AddMessage(f"\tFeature Class Path: {feature_class_path}")
+##
+## if arcpy.Exists(rf"{project_folder}\Publish\{feature_service_name}.xml"):
+## arcpy.AddMessage(f"\t###--->>> {feature_service_name}.xml Exists <<<---###")
+##
+## from arcpy import metadata as md
+## in_md = md.Metadata(rf"{project_folder}\Publish\{feature_service_name}.xml")
+## fc_md = md.Metadata(feature_class_path)
+## fc_md.copy(in_md)
+## fc_md.save()
+## del in_md, fc_md
+## del md
+##
+## else:
+## arcpy.AddWarning(f"\t###--->>> {feature_service_name}.xml Does Not Exist <<<---###")
+##
+## del dataset_code, point_feature_type, feature_class_name, region_latitude, season
+## del date_code, distribution_project_code
+##
+## del feature_service_name, feature_service_title
+## del map_title, feature_class_path
+## del dataset
+## del datasets
+##
+## arcpy.AddMessage(f"\n{'-' * 90}\n")
+##
+## # Declared Variables set in function
+## del project_gdb
+## del home_folder
+##
+## # Imports
+## del dismap
+##
+## # Function Parameters
+## del base_project_file, project
+##
+## except SystemExit:
+## sys.exit()
+## except:
+## traceback.print_exc()
+## sys.exit()
+## else:
+## try:
+## leave_out_keys = ["leave_out_keys", "remaining_keys", "results"]
+## remaining_keys = [key for key in locals().keys() if not key.startswith('__') and key not in leave_out_keys]
+## if remaining_keys:
+## arcpy.AddWarning(f"Remaining Keys in '{inspect.stack()[0][3]}': ##--> '{', '.join(remaining_keys)}' <--## Line Number: {traceback.extract_stack()[-1].lineno}")
+## del leave_out_keys, remaining_keys
+##
+## return results if "results" in locals().keys() else ["NOTE!! The 'results' variable not yet set!!"]
+##
+## except:
+## traceback.print_exc()
+## finally:
+## try:
+## if "results" in locals().keys(): del results
+## except UnboundLocalError:
+## pass
+
+def create_image_services(project_gdb=""):
+ try:
+ # Import
+
+        # Set History and Metadata logs, set severity and message level
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ #aprx = arcpy.mp.ArcGISProject(base_project_file) # noqa: F821
+ #home_folder = aprx.homeFolder
+ #project_gdb = rf"{project_folder}\{project}.gdb" # noqa: F821
+
+        # Set basic workspace variables
+ project_folder = os.path.dirname(project_gdb)
+ crfs_folder = os.path.join(project_folder, "CRFs")
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+
+ # Create Scratch Workspace for Project
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ if not arcpy.Exists(scratch_folder):
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", "scratch")
+
+        # Set basic workspace variables
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ del scratch_folder, scratch_workspace
+
+ arcpy.env.workspace = crfs_folder
+
+ for crf in arcpy.ListRasters("*"):
+ arcpy.AddMessage(crf)
+
+ arcpy.env.workspace = project_gdb
+
+## LogIntoPortal = False
+## if LogIntoPortal:
+## try:
+## portal = "https://noaa.maps.arcgis.com/"
+## user = "John.F.Kennedy_noaa"
+##
+## #portal = "https://maps.fisheries.noaa.gov/portal/home"
+## #portal = "https://maps.fisheries.noaa.gov"
+## #user = "John.F.Kennedy_noaa"
+##
+## # Sign in to portal
+## # arcpy.SignInToPortal("https://www.arcgis.com", "MyUserName", "MyPassword")
+## # For example: 'http://www.arcgis.com/'
+## arcpy.SignInToPortal(portal)
+##
+## arcpy.AddMessage(f"###---> Signed into Portal: {arcpy.GetActivePortalURL()} <---###")
+## del portal, user
+## except: # noqa: E722
+##            arcpy.AddError("###---> Sign in to Portal failed <---###")
+## sys.exit()
+## del LogIntoPortal
+
+    # Publishes an image service to a machine "myserver" from a folder of ortho images.
+    # This code first authors a mosaic dataset from the images, then publishes it as an image service.
+    # A connection to ArcGIS Server must be established in the Catalog window of ArcMap
+    # before running this script.
+
+ #import time
+ #import arceditor # this is required to create a mosaic dataset from images
+
+ #
+ # Define local variables:
+ #ImageSource=r"\\myserver\data\SourceData\Portland" # the folder of input images
+    #MyWorkspace=r"\\myserver\Data\DemoData\ArcPyPublishing" # the folder for mosaic dataset and the service definition draft file
+ #GdbName="fgdb1.gdb"
+ #GDBpath = os.path.join(MyWorkspace,GdbName) #File geodatabase used to store a mosaic dataset
+ #Name = "OrthoImages"
+ #Md = os.path.join(GDBpath, Name)
+ #Sddraft = os.path.join(MyWorkspace,Name+".sddraft")
+ #Sd = os.path.join(MyWorkspace,Name+".sd")
+ #con = os.path.join(MyWorkspace, "arcgis on myserver_6080 (admin).ags")
+
+ con = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis\\image on maps.fisheries.noaa.gov.ags")
+
+    mosaic_name = "SEUS_FAL_IDW_Mosaic"
+    mosaic_path = rf"{project_gdb}\{mosaic_name}"
+    mosaic_sddraft = rf"{project_folder}\Publish\{mosaic_name}.sddraft"
+
+    # Create service definition draft
+    try:
+        arcpy.AddMessage("Creating SD draft")
+        #arcpy.CreateImageSDDraft(Md, Sddraft, Name, 'ARCGIS_SERVER', con, False, None, "Ortho Images", "ortho images,image service")
+        arcpy.CreateImageSDDraft(mosaic_path, mosaic_sddraft, mosaic_name, 'ARCGIS_SERVER', con, False, None, "Biomass Rasters", "biomass rasters,image service")
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ import traceback
+ traceback.print_exc()
+ return False
+## # Analyze the service definition draft
+## analysis = arcpy.mapping.AnalyzeForSD(Sddraft)
+## arcpy.AddMessage("The following information was returned during analysis of the image service:")
+## for key in ('messages', 'warnings', 'errors'):
+## arcpy.AddMessage('----' + key.upper() + '---')
+## vars = analysis[key]
+## for ((message, code), layerlist) in vars.iteritems():
+## arcpy.AddMessage(' ', message, ' (CODE %i)' % code)
+## arcpy.AddMessage(' applies to:'),
+## for layer in layerlist:
+## arcpy.AddMessage(layer.name),
+## arcpy.AddMessage()
+##
+## # Stage and upload the service if the sddraft analysis did not contain errors
+## if analysis['errors'] == {}:
+## try:
+## arcpy.AddMessage("Adding data path to data store to avoid data copy")
+## arcpy.AddDataStoreItem(con, "FOLDER","Images", MyWorkspace, MyWorkspace)
+##
+## arcpy.AddMessage("Staging service to create service definition")
+## arcpy.StageService_server(Sddraft, Sd)
+##
+## arcpy.AddMessage("Uploading the service definition and publishing image service")
+## arcpy.UploadServiceDefinition_server(Sd, con)
+##
+## arcpy.AddMessage("Service successfully published")
+## except:
+## arcpy.AddError(arcpy.GetMessages()+ "\n\n")
+## sys.exit("Failed to stage and upload service")
+## else:
+## arcpy.AddError("Service could not be published because errors were found during analysis.")
+## arcpy.AddError(arcpy.GetMessages(2))
+
+ #del project_gdb
+
+ # Declared Variables set in function for aprx
+ #del home_folder
+ # Save aprx one more time and then delete
+ #aprx.save()
+ #del aprx
+
+ # Declared Variables set in function
+
+ # Imports
+
+ # Function Parameters
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except Exception:
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def create_maps(project_gdb=""):
+ try:
+ # Import
+ from arcpy import metadata as md
+
+ from dismap_tools import dataset_title_dict
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2)
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+## # Map Cleanup
+## MapCleanup = False
+## if MapCleanup:
+## map_cleanup(base_project_file)
+## del MapCleanup
+
+        project_folder = os.path.dirname(project_gdb)
+        base_project_folder = os.path.dirname(project_folder)
+        base_project_file = rf"{base_project_folder}\DisMAP.aprx"
+        metadata_folder = rf"{project_folder}\Export Metadata"
+        scratch_folder = rf"{project_folder}\Scratch"
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ aprx = arcpy.mp.ArcGISProject(base_project_file)
+ home_folder = aprx.homeFolder
+
+ #arcpy.AddMessage(f"\n{'-' * 90}\n")
+
+ metadata_dictionary = dataset_title_dict(project_gdb)
+
+ datasets = list()
+
+ walk = arcpy.da.Walk(project_gdb)
+
+ for dirpath, dirnames, filenames in walk:
+ for filename in filenames:
+ datasets.append(os.path.join(dirpath, filename))
+ del filename
+ del dirpath, dirnames, filenames
+ del walk
+
+ for dataset_path in sorted(datasets):
+ arcpy.AddMessage(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+ data_type = arcpy.Describe(dataset_path).dataType
+ if data_type == "Table":
+ #arcpy.AddMessage(f"Dataset Name: {dataset_name}")
+ #arcpy.AddMessage(f"\tData Type: {data_type}")
+
+ if "IDW" in dataset_name:
+ arcpy.AddMessage(f"Dataset Name: {dataset_name}")
+ if "Indicators" in dataset_name:
+ arcpy.AddMessage("\tRegion Indicators")
+
+ elif "LayerSpeciesYearImageName" in dataset_name:
+ arcpy.AddMessage("\tRegion Layer Species Year Image Name")
+
+ else:
+ arcpy.AddMessage("\tRegion Table")
+
+ else:
+ arcpy.AddMessage(f"Dataset Name: {dataset_name}")
+ if "Indicators" in dataset_name:
+ arcpy.AddMessage("\tMain Indicators Table")
+
+ elif "LayerSpeciesYearImageName" in dataset_name:
+ arcpy.AddMessage("\tLayer Species Year Image Name")
+
+ elif "Datasets" in dataset_name:
+ arcpy.AddMessage("\tDataset Table")
+
+ elif "Species_Filter" in dataset_name:
+ arcpy.AddMessage("\tSpecies Filter Table")
+
+ else:
+ arcpy.AddMessage(f"\tDataset Name: {dataset_name}")
+
+ elif data_type == "FeatureClass":
+ #arcpy.AddMessage(f"\tData Type: {data_type}")
+
+ if "IDW" in dataset_name:
+ arcpy.AddMessage(f"Dataset Name: {dataset_name}")
+ if dataset_name.endswith("Boundary"):
+ arcpy.AddMessage("\tBoundary")
+
+ elif dataset_name.endswith("Extent_Points"):
+ arcpy.AddMessage("\tExtent_Points")
+
+ elif dataset_name.endswith("Fishnet"):
+ arcpy.AddMessage("\tFishnet")
+
+ elif dataset_name.endswith("Lat_Long"):
+ arcpy.AddMessage("\tLat_Long")
+
+ elif dataset_name.endswith("Region"):
+ arcpy.AddMessage("\tRegion")
+
+ elif dataset_name.endswith("Sample_Locations"):
+ arcpy.AddMessage("\tSample_Locations")
+
+ else:
+ pass
+
+ elif "DisMAP_Regions" == dataset_name:
+ arcpy.AddMessage(f"Dataset Name: {dataset_name}")
+ if dataset_name.endswith("Regions"):
+ arcpy.AddMessage("\tDisMAP Regions")
+
+ else:
+ arcpy.AddMessage(f"Else Dataset Name: {dataset_name}")
+
+ elif data_type == "RasterDataset":
+
+ if "IDW" in dataset_name:
+ arcpy.AddMessage(f"Dataset Name: {dataset_name}")
+ if dataset_name.endswith("Bathymetry"):
+ arcpy.AddMessage("\tBathymetry")
+
+ elif dataset_name.endswith("Latitude"):
+ arcpy.AddMessage("\tLatitude")
+
+ elif dataset_name.endswith("Longitude"):
+ arcpy.AddMessage("\tLongitude")
+
+ elif dataset_name.endswith("Raster_Mask"):
+ arcpy.AddMessage("\tRaster_Mask")
+ else:
+ pass
+
+ elif data_type == "MosaicDataset":
+
+ if "IDW" in dataset_name:
+ arcpy.AddMessage(f"Dataset Name: {dataset_name}")
+ if dataset_name.endswith("Mosaic"):
+ arcpy.AddMessage("\tMosaic")
+ else:
+ pass
+
+ elif "CRF" in dataset_name:
+ arcpy.AddMessage(f"Dataset Name: {dataset_name}")
+ if dataset_name.endswith("CRF"):
+ arcpy.AddMessage("\tCRF")
+
+ else:
+ pass
+ else:
+ pass
+
+ del data_type
+
+ del dataset_name, dataset_path
+ del datasets
+
+ # Declared Variables set in function for aprx
+ del home_folder
+ # Save aprx one more time and then delete
+ aprx.save()
+ del aprx
+
+ # Declared Variables set in function
+ del base_project_folder, metadata_folder
+ del project_folder, scratch_folder
+ del metadata_dictionary
+
+ # Imports
+ del dataset_title_dict, md
+
+ # Function Parameters
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def script_tool(project_gdb=""):
+ try:
+ # Imports
+ from time import gmtime, localtime, strftime, time
+        # Set a start time so that we can see how long things take
+ start_time = time()
+ arcpy.AddMessage(f"{'-' * 80}")
+ arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
+ arcpy.AddMessage(f"Python Version: {sys.version}")
+ arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"{'-' * 80}\n")
+
+        # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+ del project_folder
+
+ # Create project scratch workspace, if missing
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ if not arcpy.Exists(scratch_folder):
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(rf"{scratch_folder}", "scratch")
+ del scratch_folder
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ try:
+
+ CreateFeatureClassLayers = False
+ if CreateFeatureClassLayers:
+ create_feature_class_layers(project_gdb=project_gdb)
+ del CreateFeatureClassLayers
+
+            CreateFeatureClassServices = False
+            if CreateFeatureClassServices:
+                create_feature_class_services(project_gdb=project_gdb)
+            del CreateFeatureClassServices
+
+            CreateImageServices = True
+            if CreateImageServices:
+                create_image_services(project_gdb=project_gdb)
+            del CreateImageServices
+
+ # UpdateMetadataFromPublishedMd = False
+ # if UpdateMetadataFromPublishedMd:
+ # update_metadata_from_published_md(project_gdb=project_gdb)
+ # del UpdateMetadataFromPublishedMd
+
+ CreateMaps = False
+ if CreateMaps:
+ create_maps(project_gdb=project_gdb)
+ del CreateMaps
+
+## CreateBasicTemplateXMLFiles = False
+## if CreateBasicTemplateXMLFiles:
+## create_basic_template_xml_files(project_gdb=project_gdb)
+## del CreateBasicTemplateXMLFiles
+##
+## ImportBasicTemplateXmlFiles = False
+## if ImportBasicTemplateXmlFiles:
+## import_basic_template_xml_files(project_gdb=project_gdb)
+## del ImportBasicTemplateXmlFiles
+
+ except SystemExit:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ sys.exit()
+
+ # Variable created in function
+ #
+ # Function Parameters
+ del project_gdb
+ # Elapsed time
+ end_time = time()
+        elapse_time = end_time - start_time
+        hours, rem = divmod(elapse_time, 3600)
+ minutes, seconds = divmod(rem, 60)
+ arcpy.AddMessage(f"\n{'-' * 80}")
+ arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
+ arcpy.AddMessage(f"End Time: {strftime('%a %b %d %I:%M %p', localtime(end_time))}")
+ arcpy.AddMessage(f"Elapsed Time {int(hours):0>2}:{int(minutes):0>2}:{seconds:05.2f} (H:M:S)")
+ arcpy.AddMessage(f"{'-' * 80}")
+ del hours, rem, minutes, seconds
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ project_gdb = arcpy.GetParameterAsText(0)
+
+ if not project_gdb:
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
+ else:
+ pass
+
+ script_tool(project_gdb)
+
+ arcpy.SetParameterAsText(1, "Result")
+
+ del project_gdb
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/zip_and_unzip_csv_data.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/zip_and_unzip_csv_data.py
new file mode 100644
index 0000000..a63bdb9
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/zip_and_unzip_csv_data.py
@@ -0,0 +1,161 @@
+"""
+Script documentation
+- Tool parameters are accessed using arcpy.GetParameter() or
+ arcpy.GetParameterAsText()
+- Update derived parameter values using arcpy.SetParameter() or
+ arcpy.SetParameterAsText()
+"""
+import os
+import sys
+import traceback
+
+import arcpy
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+    filename = __file__
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
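The `trace()` helper above mines the active traceback for the failing line number and the final error message (the tool's version also returns a filename). A standalone sketch of the same pattern, runnable outside ArcGIS:

```python
import sys
import traceback

def trace():
    # Inspect the exception currently being handled.
    tb = sys.exc_info()[2]
    # format_tb() yields strings like '  File "x.py", line 12, in <module>\n    1 / 0\n'
    tbinfo = traceback.format_tb(tb)[0]
    line = tbinfo.split(", ")[1]                        # e.g. 'line 12'
    synerror = traceback.format_exc().splitlines()[-1]  # e.g. 'ZeroDivisionError: ...'
    return line, synerror

try:
    1 / 0
except ZeroDivisionError:
    line, err = trace()
    print(line, "->", err)
```

Note this only works while an exception is being handled; after the `except` block ends, `sys.exc_info()` is cleared.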
+def script_tool(project_folder, source_zip_file):
+ """Script code goes below"""
+ try:
+ from zipfile import ZipFile
+ from arcpy import metadata as md
+ from lxml import etree
+ from io import StringIO
+
+ aprx = arcpy.mp.ArcGISProject("CURRENT")
+ #aprx.save()
+ project_folder = aprx.homeFolder
+ arcpy.AddMessage(project_folder)
+ out_data_path = rf"{project_folder}\CSV_Data"
+ import json
+ json_path = rf"{out_data_path}\root_dict.json"
+ with open(json_path, "r") as json_file:
+ root_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del json
+ arcpy.AddMessage(out_data_path)
+ # Change Directory
+ os.chdir(out_data_path)
+ arcpy.AddMessage(f"Un-Zipping files from {os.path.basename(source_zip_file)}")
+ with ZipFile(source_zip_file, mode="r") as archive:
+ for file in archive.namelist():
+ archive.extract(file, ".")
+ del file
+ del archive
+ arcpy.AddMessage(f"Done Un-Zipping files from {os.path.basename(source_zip_file)}")
+ tmp_workspace = arcpy.env.workspace
+ arcpy.env.workspace = rf"{out_data_path}\python"
+ csv_files = arcpy.ListFiles("*_survey.csv")
+        arcpy.AddMessage("Copying CSV files and renaming them")
+ for csv_file in csv_files:
+ arcpy.management.Copy(rf"{out_data_path}\python\{csv_file}", rf"{out_data_path}\{csv_file.replace('_survey', '_IDW')}")
+ del csv_file
+ del csv_files
+ arcpy.env.workspace = tmp_workspace
+ del tmp_workspace
+ if arcpy.Exists(rf"{out_data_path}\python"):
+ arcpy.AddMessage("Removing the extract folder")
+ arcpy.management.Delete(rf"{out_data_path}\python")
+ else:
+ pass
+        arcpy.AddMessage("Adding metadata to CSV files")
+ tmp_workspace = arcpy.env.workspace
+ arcpy.env.workspace = out_data_path
+ contacts = rf"{os.path.dirname(project_folder)}\Datasets\DisMAP Contacts 2025 08 01.xml"
+ csv_files = arcpy.ListFiles("*_IDW.csv")
+ for csv_file in csv_files:
+ arcpy.AddMessage(f"\t{csv_file}")
+ dataset_md = md.Metadata(rf"{out_data_path}\{csv_file}")
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.importMetadata(contacts, "ARCGIS_METADATA")
+ dataset_md.save()
+ dataset_md.synchronize("OVERWRITE")
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ target_tree = etree.parse(StringIO(dataset_md.xml), parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root = target_tree.getroot()
+ target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag])
+ new_item_name = target_root.find("Esri/DataProperties/itemProps/itemName").text
+ arcpy.AddMessage(new_item_name)
+## onLineSrcs = target_root.findall("distInfo/distTranOps/onLineSrc")
+## #arcpy.AddMessage(onLineSrcs)
+## for onLineSrc in onLineSrcs:
+## if onLineSrc.find('./protocol').text == "ESRI REST Service":
+## old_linkage_element = onLineSrc.find('./linkage')
+## old_linkage = old_linkage_element.text
+## #arcpy.AddMessage(old_linkage)
+## old_item_name = old_linkage[old_linkage.find("/services/")+len("/services/"):old_linkage.find("/FeatureServer")]
+## new_linkage = old_linkage.replace(old_item_name, new_item_name)
+## #arcpy.AddMessage(new_linkage)
+## old_linkage_element.text = new_linkage
+## #arcpy.AddMessage(old_linkage_element.text)
+## del old_linkage_element
+## del old_item_name, old_linkage, new_linkage
+## onLineSrc.find('./orName').text = f"{new_item_name} Feature Service"
+## del onLineSrcs, new_item_name
+ etree.indent(target_root, space=' ')
+ dataset_md.xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+
+ del dataset_md
+
+ del csv_file
+ del csv_files
+ arcpy.env.workspace = tmp_workspace
+ del tmp_workspace
+ del project_folder
+ del source_zip_file
+ del md
+ return out_data_path
+
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ except:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ else:
+ pass
+    finally:
+        if "out_data_path" in locals().keys():
+            del out_data_path
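In the metadata loop above, `target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag])` reorders the XML root's children by a tag-order dictionary loaded from JSON. A minimal sketch of the same reordering using only the standard library; the tag names and their order here are hypothetical stand-ins for `root_dict`:

```python
import xml.etree.ElementTree as ET

# Hypothetical tag order, standing in for the root_dict loaded from JSON.
tag_order = {"Esri": 0, "dataIdInfo": 1, "distInfo": 2}

root = ET.fromstring("<metadata><distInfo/><Esri/><dataIdInfo/></metadata>")

# Sort the children by their position in the order dictionary,
# then replace the root's child list with the sorted sequence.
children = sorted(root, key=lambda el: tag_order[el.tag])
for child in list(root):
    root.remove(child)
root.extend(children)

print([el.tag for el in root])  # -> ['Esri', 'dataIdInfo', 'distInfo']
```

A `KeyError` from the sort key is the signal that the document contains a tag the order dictionary does not know about.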
+if __name__ == "__main__":
+ try:
+ project_folder = arcpy.GetParameterAsText(0)
+ if not project_folder:
+            project_folder = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026")
+ else:
+ pass
+
+ source_zip_file = arcpy.GetParameterAsText(1)
+ if not source_zip_file:
+            source_zip_file = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\Initial Data\\CSV Data 20260201.zip")
+ else:
+ pass
+
+ script_tool(project_folder, source_zip_file)
+
+ arcpy.SetParameterAsText(2, True)
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools/zip_and_unzip_shapefile_data.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools/zip_and_unzip_shapefile_data.py
new file mode 100644
index 0000000..dba199d
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools/zip_and_unzip_shapefile_data.py
@@ -0,0 +1,70 @@
+"""
+Script documentation
+- Tool parameters are accessed using arcpy.GetParameter() or
+ arcpy.GetParameterAsText()
+- Update derived parameter values using arcpy.SetParameter() or
+ arcpy.SetParameterAsText()
+"""
+import traceback
+
+import arcpy
+
+def trace():
+    import sys, traceback  # noqa: E401
+    tb = sys.exc_info()[2]
+    tbinfo = traceback.format_tb(tb)[0]
+    line = tbinfo.split(", ")[1]
+    filename = __file__
+    synerror = traceback.format_exc().splitlines()[-1]
+    return line, filename, synerror
+
+def script_tool(home_folder="", source_zip_file=""):
+ """Script code goes below"""
+ try:
+ import os
+ from zipfile import ZipFile
+ #aprx = arcpy.mp.ArcGISProject("CURRENT")
+ #aprx.save()
+ #home_folder = aprx.homeFolder
+ arcpy.AddMessage(home_folder)
+
+ out_data_path = rf"{home_folder}\Dataset_Shapefiles"
+ arcpy.AddMessage(out_data_path)
+
+ # Change Directory
+ os.chdir(out_data_path)
+ arcpy.AddMessage(f"Un-Zipping files from {os.path.basename(source_zip_file)}")
+ with ZipFile(source_zip_file, mode="r") as archive:
+ for file in archive.namelist():
+ archive.extract(file, ".")
+ del file
+ del archive
+
+ arcpy.AddMessage(f"Done Un-Zipping files from {os.path.basename(source_zip_file)}")
+
+ del home_folder
+ del source_zip_file
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
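The unzip pattern used in `script_tool` (open the archive, iterate `namelist()`, `extract()` each member) can be exercised end to end with a throwaway archive; the file names below are made up for illustration:

```python
import os
import tempfile
from zipfile import ZipFile

with tempfile.TemporaryDirectory() as out_dir:
    zip_path = os.path.join(out_dir, "sample.zip")

    # Build a small archive to stand in for the source shapefile zip.
    with ZipFile(zip_path, mode="w") as archive:
        archive.writestr("regions.shp", b"fake shapefile bytes")
        archive.writestr("regions.dbf", b"fake attribute bytes")

    # Mirror the tool's pattern: iterate namelist() and extract each member.
    with ZipFile(zip_path, mode="r") as archive:
        for member in archive.namelist():
            archive.extract(member, out_dir)

    print(sorted(f for f in os.listdir(out_dir) if f != "sample.zip"))
    # -> ['regions.dbf', 'regions.shp']
```

Passing an explicit destination to `extract()` avoids the `os.chdir()` the tool relies on, which is worth considering since a changed working directory outlives the function.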
+if __name__ == "__main__":
+ try:
+ home_folder = arcpy.GetParameterAsText(0)
+ source_zip_file = arcpy.GetParameterAsText(1)
+ script_tool(home_folder, source_zip_file)
+ arcpy.SetParameterAsText(2, "Result")
+ del home_folder, source_zip_file
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ except:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/dismap_project_version_setup.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools_dev/dev_dismap_director.py
similarity index 94%
rename from ArcGIS-Analysis-Python/src/dismap_tools/dismap_project_version_setup.py
rename to ArcGIS-Analysis-Python/Scripts/dismap_tools_dev/dev_dismap_director.py
index b704905..713e42a 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/dismap_project_version_setup.py
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools_dev/dev_dismap_director.py
@@ -9,24 +9,24 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
# -------------------------------------------------------------------------------
-import os
-import sys # built-ins first
+import os, sys # built-ins first
import traceback
+import importlib
import inspect
-import arcpy # third-parties second # noqa: F401
+import arcpy # third-parties second
def main(project_gdb=""):
try:
from time import gmtime, localtime, strftime, time
# Set a start time so that we can see how log things take
start_time = time()
- arcpy.AddMessage(f"{'-' * 80}")
- arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
- arcpy.AddMessage(f"Python Version: {sys.version}")
- arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
- arcpy.AddMessage(f"{'-' * 80}\n")
+ print(f"{'-' * 80}")
+ print(f"Python Script: {os.path.basename(__file__)}")
+        print(rf"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ print(f"Python Version: {sys.version}")
+ print(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ print(f"{'-' * 80}\n")
# Set varaibales
project_folder = os.path.dirname(project_gdb)
@@ -418,21 +418,19 @@ def main(project_gdb=""):
# Elapsed time
end_time = time()
elapse_time = end_time - start_time
- arcpy.AddMessage(f"\n{'-' * 80}")
- arcpy.AddMessage(f"Python script: {os.path.basename(__file__)}\nCompleted: {strftime('%a %b %d %I:%M %p', localtime())}")
- arcpy.AddMessage(u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time))))
- arcpy.AddMessage(f"{'-' * 80}")
+ print(f"\n{'-' * 80}")
+ print(f"Python script: {os.path.basename(__file__)}\nCompleted: {strftime('%a %b %d %I:%M %p', localtime())}")
+ print(u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time))))
+ print(f"{'-' * 80}")
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except: # noqa: E722
+ except:
traceback.print_exc()
raise SystemExit
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk:
- arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
- del rk
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
finally:
pass
@@ -448,7 +446,7 @@ def main(project_gdb=""):
#project_name = "December 1 2024"
#project_name = "June 1 2025"
#for project_name in ["June 1 2025"]:
- for project_name in ["February 1 2026",]:
+ for project_name in ["December 1 2024", "June 1 2025"]:
project_folder = rf"{base_project_folder}"
project_gdb = rf"{project_folder}\{project_name}\{project_name}.gdb"
main(project_gdb=project_gdb)
@@ -458,7 +456,7 @@ def main(project_gdb=""):
# Imports
except SystemExit:
pass
- except: # noqa: E722
+ except:
traceback.print_exc()
else:
pass
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools_dev/dev_dismap_metadata_processing.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools_dev/dev_dismap_metadata_processing.py
new file mode 100644
index 0000000..0c6645b
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools_dev/dev_dismap_metadata_processing.py
@@ -0,0 +1,4381 @@
+# -*- coding: utf-8 -*-
+# -------------------------------------------------------------------------------
+# Name: module1
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 03/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+# -------------------------------------------------------------------------------
+import os, sys # built-ins first
+import traceback
+import importlib
+import inspect
+
+import arcpy # third-parties second
+
+def new_function():
+ try:
+ pass
+        # Declared Variables
+ # Imports
+ # Function Parameters
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+def date_code(version):
+ try:
+ from datetime import datetime
+ from time import strftime
+
+ _date_code = ""
+
+ if version.isdigit():
+ # The version value is 'YYYYMMDD' format (20230501)
+ # and is converted to 'Month Day and Year' (i.e. May 1 2023)
+ _date_code = datetime.strptime(version, "%Y%m%d").strftime("%B %#d %Y")
+ else:
+ # The version value is 'Month Day and Year' (i.e. May 1 2023)
+ # and is converted to 'YYYYMMDD' format (20230501)
+ _date_code = datetime.strptime(version, "%B %d %Y").strftime("%Y%m%d")
+ # Imports
+ del datetime, strftime
+ del version
+
+ import copy
+ __results = copy.deepcopy(_date_code)
+ del _date_code, copy
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return __results
+ finally:
+ if "__results" in locals().keys(): del __results
+
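The `date_code` round-trip above relies on the Windows-only `%#d` strftime flag (day without a leading zero); a portable sketch of the same two conversions, using hypothetical helper names, looks like:

```python
from datetime import datetime

def to_title(version: str) -> str:
    # 'YYYYMMDD' -> 'Month D YYYY' without a zero-padded day
    # (portable equivalent of the Windows-only '%#d' flag used above)
    dt = datetime.strptime(version, "%Y%m%d")
    return f"{dt.strftime('%B')} {dt.day} {dt.year}"

def to_code(version: str) -> str:
    # 'Month D YYYY' -> 'YYYYMMDD'; strptime accepts a non-padded day
    return datetime.strptime(version, "%B %d %Y").strftime("%Y%m%d")

print(to_title("20241201"))        # December 1 2024
print(to_code("December 1 2024"))  # 20241201
```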
+# #
+# Function: unique_years
+# Gets the unique years in a table
+# @param string table: The name of the layer
+# @return array: a sorted year array so we can go in order.
+# #
+def unique_years(table):
+ #print(table)
+ arcpy.management.SelectLayerByAttribute( table, "CLEAR_SELECTION" )
+ arcpy.management.SelectLayerByAttribute( table, "NEW_SELECTION", "Year IS NOT NULL")
+ with arcpy.da.SearchCursor(table, ["Year"]) as cursor:
+ return sorted({row[0] for row in cursor})
+
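`unique_years` depends on an arcpy `SearchCursor`; the underlying pattern (deduplicate with a set comprehension, then sort) can be exercised without ArcGIS on plain row tuples:

```python
# Rows as a cursor would yield them: 1-tuples of Year, possibly with NULLs
rows = [(2021,), (2019,), (2021,), (None,), (2020,)]

# Same set-comprehension-plus-sort as unique_years(); the None filter
# stands in for the "Year IS NOT NULL" selection done with arcpy
years = sorted({row[0] for row in rows if row[0] is not None})
print(years)  # [2019, 2020, 2021]
```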
+def xml_tree_merge(source, target):
+ """Merge two XML trees, so that each recursively found leaf element of target is
+ added to source. If a leaf element already exists in source, source's version is
+ kept. Tree structure is created in source as required to reflect the position of
+ the leaf element in target (element order is not guaranteed).
+ Requires lxml, for the xpath() call below.
+ """
+ import copy
+ def inner(aparent, bparent):
+ for bchild in bparent:
+ achild = aparent.xpath('./' + bchild.tag)
+ if not achild:
+ aparent.append(bchild)
+ elif bchild.getchildren():
+ inner(achild[0], bchild)
+
+ source_copy = copy.deepcopy(source)
+ inner(source_copy, target)
+ return source_copy
+
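`xml_tree_merge` needs lxml for `xpath()`; a minimal stdlib sketch of the same merge, substituting ElementTree's `find()` for `xpath()` (element names here are illustrative, not from the DisMAP metadata schema):

```python
import copy
import xml.etree.ElementTree as ET

def merge(source, target):
    # Same recursion as xml_tree_merge: leaves of target missing from
    # source are appended; leaves already present keep source's version
    def inner(aparent, bparent):
        for bchild in bparent:
            achild = aparent.find(bchild.tag)
            if achild is None:
                aparent.append(bchild)
            elif len(bchild):  # bchild has children: descend
                inner(achild, bchild)
    source_copy = copy.deepcopy(source)
    inner(source_copy, target)
    return source_copy

a = ET.fromstring("<meta><title>old</title></meta>")
b = ET.fromstring("<meta><title>new</title><idinfo><abstract>x</abstract></idinfo></meta>")
merged = merge(a, b)
print(merged.find("title").text)            # old (source's leaf kept)
print(merged.find("idinfo/abstract").text)  # x   (missing subtree added)
```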
+def dataset_title_dict(project_gdb=""):
+ try:
+ if "Scratch" in project_gdb:
+ project = os.path.basename(os.path.dirname(os.path.dirname(project_gdb)))
+ else:
+ project = os.path.basename(os.path.dirname(project_gdb))
+
+ project_folder = os.path.dirname(project_gdb)
+ crf_folder = rf"{project_folder}\CRFs"
+ _credits = "These data were produced by NMFS OST."
+ access_constraints = "***No Warranty*** The user assumes the entire risk related to its use of these data. NMFS is providing these data 'as is' and NMFS disclaims any and all warranties, whether express or implied, including (without limitation) any implied warranties of merchantability or fitness for a particular purpose. No warranty expressed or implied is made regarding the accuracy or utility of the data on any other system or for general or scientific purposes, nor shall the act of distribution constitute any such warranty. It is strongly recommended that careful attention be paid to the contents of the metadata file associated with these data to evaluate dataset limitations, restrictions or intended use. In no event will NMFS be liable to you or to any third party for any direct, indirect, incidental, consequential, special or exemplary damages or lost profit resulting from any use or misuse of these data."
+
+ __datasets_dict = {}
+
+ dataset_codes = {row[0] : [row[1], row[2], row[3], row[4], row[5]] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"), ["DatasetCode", "PointFeatureType", "DistributionProjectCode", "FilterRegion", "FilterSubRegion", "Season"])}
+ for dataset_code in dataset_codes:
+ point_feature_type = dataset_codes[dataset_code][0] if dataset_codes[dataset_code][0] else ""
+ distribution_project_code = dataset_codes[dataset_code][1] if dataset_codes[dataset_code][1] else ""
+ filter_region = dataset_codes[dataset_code][2] if dataset_codes[dataset_code][2] else dataset_code.replace("_", " ")
+ filter_sub_region = dataset_codes[dataset_code][3] if dataset_codes[dataset_code][3] else dataset_code.replace("_", " ")
+ season = dataset_codes[dataset_code][4] if dataset_codes[dataset_code][4] else ""
+
+ tags = f"DisMAP; {filter_region}" if filter_region == filter_sub_region else f"DisMAP; {filter_region}; {filter_sub_region}"
+ tags = f"{tags}; {season}" if season else f"{tags}"
+ tags = f"{tags}; distribution; seasonal distribution; fish; invertebrates; climate change; fishery-independent surveys; ecological dynamics; oceans; biosphere; earth science; species/population interactions; aquatic sciences; fisheries; range changes"
+ summary = "These data were created as part of the DisMAP project to enable visualization and analysis of changes in fish and invertebrate distributions"
+
+ #print(f"Dateset Code: {dataset_code}")
+ if distribution_project_code:
+ if distribution_project_code == "IDW":
+
+ #table_name = f"{dataset_code}_{distribution_project_code}_TABLE"
+ table_name = f"{dataset_code}_{distribution_project_code}"
+ table_name_s = f"{table_name}_{date_code(project)}"
+ table_name_st = f"{filter_sub_region} {season} Table {date_code(project)}".replace(' ',' ')
+
+ #print(f"\tProcessing: {table_name}")
+
+ __datasets_dict[table_name] = {"Dataset Service" : table_name_s,
+ "Dataset Service Title" : table_name_st,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"This table represents the CSV Data files in ArcGIS format",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ del table_name, table_name_s, table_name_st
+
+ table_name = f"{dataset_code}_{distribution_project_code}"
+ sample_locations_fc = f"{table_name}_{point_feature_type.replace(' ', '_')}"
+ sample_locations_fcs = f"{table_name}_{point_feature_type.replace(' ', '_')}_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} {point_feature_type} {date_code(project)}"
+ sample_locations_fcst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ __datasets_dict[sample_locations_fc] = {"Dataset Service" : sample_locations_fcs,
+ "Dataset Service Title" : sample_locations_fcst,
+ "Tags" : tags,
+ "Summary" : f"{summary}. These layers provide information on the spatial extent/boundaries of the bottom trawl surveys. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"This survey points layer provides information on both the locations where species are caught in several NOAA Fisheries surveys and the amount (i.e., biomass weight catch per unit effort, standardized to kg/ha) of each species that was caught at each location. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tSample Locations FC: {sample_locations_fc}")
+ #print(f"\tSample Locations FCS: {sample_locations_fcs}")
+ #print(f"\tSample Locations FST: {sample_locations_fcst}")
+
+ del table_name, sample_locations_fc, sample_locations_fcs, sample_locations_fcst
+
+ table_name = f"{dataset_code}"
+ sample_locations_fc = f"{table_name}_{point_feature_type.replace(' ', '_')}"
+ sample_locations_fcs = f"{table_name}_{point_feature_type.replace(' ', '_')}_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} {point_feature_type} {date_code(project)}"
+ sample_locations_fcst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ __datasets_dict[sample_locations_fc] = {"Dataset Service" : sample_locations_fcs,
+ "Dataset Service Title" : sample_locations_fcst,
+ "Tags" : tags,
+ "Summary" : f"{summary}. These layers provide information on the spatial extent/boundaries of the bottom trawl surveys. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"This survey points layer provides information on both the locations where species are caught in several NOAA Fisheries surveys and the amount (i.e., biomass weight catch per unit effort, standardized to kg/ha) of each species that was caught at each location. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tSample Locations FC: {sample_locations_fc}")
+ #print(f"\tSample Locations FCS: {sample_locations_fcs}")
+ #print(f"\tSample Locations FST: {sample_locations_fcst}")
+
+ del table_name, sample_locations_fc, sample_locations_fcs, sample_locations_fcst
+
+
+ elif distribution_project_code != "IDW":
+
+ #table_name = f"{dataset_code}_TABLE"
+ table_name = f"{dataset_code}"
+ table_name_s = f"{table_name}_{date_code(project)}"
+ table_name_st = f"{filter_sub_region} {season} Table {date_code(project)}".replace(' ',' ')
+
+ #print(f"\tProcessing: {table_name}")
+
+ __datasets_dict[table_name] = {"Dataset Service" : table_name_s,
+ "Dataset Service Title" : table_name_st,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"This table represents the CSV Data files in ArcGIS format",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ del table_name, table_name_s, table_name_st
+
+ table_name = f"{dataset_code}"
+ grid_points_fc = f"{table_name}_{point_feature_type.replace(' ', '_')}"
+ grid_points_fcs = f"{table_name}_{point_feature_type.replace(' ', '_')}_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Sample Locations {date_code(project)}"
+ grid_points_fcst = f"{dataset_code.replace('_', ' ')} {point_feature_type} {date_code(project)}"
+
+ __datasets_dict[grid_points_fc] = {"Dataset Service" : grid_points_fcs,
+ "Dataset Service Title" : grid_points_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"This grid points layer provides information on model output amount (i.e., biomass weight catch per unit effort, standardized to kg/ha) of each species that was modeled at each location. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tGRID Points FC: {grid_points_fc}")
+ #print(f"\tGRID Points FCS: {grid_points_fcs}")
+ #print(f"\tGRID Points FCST: {grid_points_fcst}")
+
+ del table_name, grid_points_fc, grid_points_fcs, grid_points_fcst
+
+ dataset_code = f"{dataset_code}_{distribution_project_code}" if distribution_project_code not in dataset_code else dataset_code
+
+ # Bathymetry
+ bathymetry_r = f"{dataset_code}_Bathymetry"
+ bathymetry_rs = f"{dataset_code}_Bathymetry_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Bathymetry {date_code(project)}"
+ bathymetry_rst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {bathymetry_r}")
+
+ __datasets_dict[bathymetry_r] = {"Dataset Service" : bathymetry_rs,
+ "Dataset Service Title" : bathymetry_rst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The bathymetry dataset represents the ocean depth at that grid cell.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tBathymetry R: {bathymetry_r}")
+ #print(f"\tBathymetry RS: {bathymetry_rs}")
+ #print(f"\tBathymetry RST: {bathymetry_rst}")
+
+ del bathymetry_r, bathymetry_rs, bathymetry_rst
+
+ # Boundary
+ boundary_fc = f"{dataset_code}_Boundary"
+ boundary_fcs = f"{dataset_code}_Boundary_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Boundary {date_code(project)}"
+ boundary_fcst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {boundary_fc}")
+
+ __datasets_dict[boundary_fc] = {"Dataset Service" : boundary_fcs,
+ "Dataset Service Title" : boundary_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"These files contain the spatial boundaries of the NOAA Fisheries Bottom-trawl surveys. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Eastern Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tBoundary FC: {boundary_fc}")
+ #print(f"\tBoundary FCS: {boundary_fcs}")
+ #print(f"\tBoundary FCST: {boundary_fcst}")
+
+ del boundary_fc, boundary_fcs, boundary_fcst
+
+ # Boundary Line
+ boundary_line_fc = f"{dataset_code}_Boundary_Line"
+ boundary_line_fcs = f"{dataset_code}_Boundary_Line_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Boundary Line {date_code(project)}"
+ boundary_line_fcst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {boundary_line_fc}")
+
+ __datasets_dict[boundary_line_fc] = {"Dataset Service" : boundary_line_fcs,
+ "Dataset Service Title" : boundary_line_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"These files contain the spatial boundaries of the NOAA Fisheries Bottom-trawl surveys. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Eastern Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tBoundary FC: {boundary_line_fc}")
+ #print(f"\tBoundary FCS: {boundary_line_fcs}")
+ #print(f"\tBoundary FCST: {boundary_line_fcst}")
+
+ del boundary_line_fc, boundary_line_fcs, boundary_line_fcst
+
+ # CRF
+ crf_r = f"{dataset_code}_CRF"
+ crf_rs = f"{dataset_code}_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} {dataset_code[dataset_code.rfind('_')+1:]} {date_code(project)}"
+ crf_rst = f"{feature_service_title.replace(' ',' ')}"
+ #del feature_service_title
+
+ #print(f"Processing: {crf_r}")
+ #print(f"\t{crf_rs}")
+ #print(f"\t{feature_service_title}")
+ #print(f"\t{crf_rst}")
+
+ __datasets_dict[crf_r] = {"Dataset Service" : crf_rs,
+ "Dataset Service Title" : crf_rst,
+ "Tags" : tags,
+ "Summary" : f"{summary}. These interpolated biomass layers provide information on the spatial distribution of species caught in the NOAA Fisheries fisheries-independent surveys. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"NOAA Fisheries and its partners conduct fisheries-independent surveys in 8 regions in the US (Northeast, Southeast, Gulf of Mexico, West Coast, Gulf of Alaska, Bering Sea, Aleutian Islands, Hawai’i Islands). These surveys are designed to collect information on the seasonal distribution, relative abundance, and biodiversity of fish and invertebrate species found in U.S. waters. Over 400 species of fish and invertebrates have been identified in these surveys.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tCRF R: {crf_r}")
+ #print(f"\tCRF RS: {crf_rs}")
+ #print(f"\tCRF RST: {crf_rst}")
+
+ del crf_r, crf_rs, crf_rst
+
+ # Extent Points
+ extent_points_fc = f"{dataset_code}_Extent_Points"
+ extent_points_fcs = f"{dataset_code}_Extent_Points_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Extent Points {date_code(project)}"
+ extent_points_fcst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {extent_points_fc}")
+
+ __datasets_dict[extent_points_fc] = {"Dataset Service" : extent_points_fcs,
+ "Dataset Service Title" : extent_points_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The Extent Points layer represents the extent of the model region.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tExtent Points FC: {extent_points_fc}")
+ #print(f"\tExtent Points FCS: {extent_points_fcs}")
+ #print(f"\tExtent Points FCST: {extent_points_fcst}")
+
+ del extent_points_fc, extent_points_fcs, extent_points_fcst
+
+ # Points
+ points_fc = f"{dataset_code}_Points"
+ points_fcs = f"{dataset_code}_Points_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Points {date_code(project)}"
+ points_fcst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {points_fc}")
+
+ __datasets_dict[points_fc] = {"Dataset Service" : points_fcs,
+ "Dataset Service Title" : points_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The Points layer represents the extent of the model region.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tPoints FC: {points_fc}")
+ #print(f"\tPoints FCS: {points_fcs}")
+ #print(f"\tPoints FCST: {points_fcst}")
+
+ del points_fc, points_fcs, points_fcst
+
+ fishnet_fc = f"{dataset_code}_Fishnet"
+ fishnet_fcs = f"{dataset_code}_Fishnet_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Fishnet {date_code(project)}"
+ fishnet_fcst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {fishnet_fc}")
+
+ __datasets_dict[fishnet_fc] = {"Dataset Service" : fishnet_fcs,
+ "Dataset Service Title" : fishnet_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The Fishnet is used to create the latitude and longitude rasters.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tFishnet FC: {fishnet_fc}")
+ #print(f"\tFishnet FCS: {fishnet_fcs}")
+ #print(f"\tFishnet FCST: {fishnet_fcst}")
+
+ del fishnet_fc, fishnet_fcs, fishnet_fcst
+
+ indicators_tb = f"{dataset_code}_Indicators"
+ indicators_tbs = f"{dataset_code}_Indicators_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Indicators Table {date_code(project)}"
+ indicators_tbst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {indicators_tb}")
+
+ __datasets_dict[indicators_tb] = {"Dataset Service" : indicators_tbs,
+ "Dataset Service Title" : indicators_tbst,
+ "Tags" : tags,
+ "Summary" : f"{summary}. This table provides the key metrics used to evaluate a species distribution shift. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"These data contain the key distribution metrics of center of gravity, range limits, and depth for each species in the portal. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tIndicators TB: {indicators_tb}")
+ #print(f"\tIndicators TBS: {indicators_tbs}")
+ #print(f"\tIndicators TBST: {indicators_tbst}")
+
+ del indicators_tb, indicators_tbs, indicators_tbst
+
+ lat_long_fc = f"{dataset_code}_Lat_Long"
+ lat_long_fcs = f"{dataset_code}_Lat_Long_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Lat Long {date_code(project)}"
+ lat_long_fcst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {lat_long_fc}")
+
+ __datasets_dict[lat_long_fc] = {"Dataset Service" : lat_long_fcs,
+ "Dataset Service Title" : lat_long_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The lat_long layer is used to get the latitude & longitude values to create these rasters",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tLat Long FC: {lat_long_fc}")
+ #print(f"\tLat Long FCS: {lat_long_fcs}")
+ #print(f"\tLat Long FCST: {lat_long_fcst}")
+
+ del lat_long_fc, lat_long_fcs, lat_long_fcst
+
+ latitude_r = f"{dataset_code}_Latitude"
+ latitude_rs = f"{dataset_code}_Latitude_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Latitude {date_code(project)}"
+ latitude_rst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {latitude_r}")
+
+ __datasets_dict[latitude_r] = {"Dataset Service" : latitude_rs,
+ "Dataset Service Title" : latitude_rst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The Latitude raster",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tLatitude R: {latitude_r}")
+ #print(f"\tLatitude RS: {latitude_rs}")
+ #print(f"\tLatitude RST: {latitude_rst}")
+
+ del latitude_r, latitude_rs, latitude_rst
+
+ layer_species_year_image_name_tb = f"{dataset_code}_LayerSpeciesYearImageName"
+ layer_species_year_image_name_tbs = f"{dataset_code}_LayerSpeciesYearImageName_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Layer Species Year Image Name Table {date_code(project)}"
+ layer_species_year_image_name_tbst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {layer_species_year_image_name_tb}")
+
+ __datasets_dict[layer_species_year_image_name_tb] = {"Dataset Service" : layer_species_year_image_name_tbs,
+ "Dataset Service Title" : layer_species_year_image_name_tbst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"Layer Species Year Image Name Table",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tLayerSpeciesYearImageName T: {layer_species_year_image_name_tb}")
+ #print(f"\tLayerSpeciesYearImageName TS: {layer_species_year_image_name_tbs}")
+ #print(f"\tLayerSpeciesYearImageName TST: {layer_species_year_image_name_tbst}")
+
+ del layer_species_year_image_name_tb, layer_species_year_image_name_tbs, layer_species_year_image_name_tbst
+
+ longitude_r = f"{dataset_code}_Longitude"
+ longitude_rs = f"{dataset_code}_Longitude_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Longitude {date_code(project)}"
+ longitude_rst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {longitude_r}")
+
+ __datasets_dict[longitude_r] = {"Dataset Service" : longitude_rs,
+ "Dataset Service Title" : longitude_rst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"The Longitude raster",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tLongitude R: {longitude_r}")
+ #print(f"\tLongitude RS: {longitude_rs}")
+ #print(f"\tLongitude RST: {longitude_rst}")
+
+ del longitude_r, longitude_rs, longitude_rst
+
+ mosaic_r = f"{dataset_code}_Mosaic"
+ mosaic_rs = f"{dataset_code}_Mosaic_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} {dataset_code[dataset_code.rfind('_')+1:]} Mosaic {date_code(project)}"
+ mosaic_rst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {mosaic_r}")
+
+ __datasets_dict[mosaic_r] = {"Dataset Service" : mosaic_rs,
+ "Dataset Service Title" : mosaic_rst,
+ #"Tags" : _tags,
+ "Tags" : tags,
+ "Summary" : f"{summary}. These interpolated biomass layers provide information on the spatial distribution of species caught in the NOAA Fisheries fisheries-independent surveys. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"NOAA Fisheries and its partners conduct fisheries-independent surveys in 8 regions in the US (Northeast, Southeast, Gulf of Mexico, West Coast, Gulf of Alaska, Bering Sea, Aleutian Islands, Hawai’i Islands). These surveys are designed to collect information on the seasonal distribution, relative abundance, and biodiversity of fish and invertebrate species found in U.S. waters. Over 400 species of fish and invertebrates have been identified in these surveys.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tMosaic R: {mosaic_r}")
+ #print(f"\tMosaic RS: {mosaic_rs}")
+ #print(f"\tMosaic RST: {mosaic_rst}")
+
+ del mosaic_r, mosaic_rs, mosaic_rst
+
+ crf_r = f"{dataset_code}.crf"
+ crf_rs = f"{dataset_code}_CRF_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} {dataset_code[dataset_code.rfind('_')+1:]} C {date_code(project)}"
+ crf_rst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {crf_r}")
+
+ __datasets_dict[crf_r] = {"Dataset Service" : crf_rs,
+ "Dataset Service Title" : crf_rst,
+ #"Tags" : _tags,
+ "Tags" : tags,
+ "Summary" : f"{summary}. These interpolated biomass layers provide information on the spatial distribution of species caught in the NOAA Fisheries fisheries-independent surveys. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"NOAA Fisheries and its partners conduct fisheries-independent surveys in 8 regions in the US (Northeast, Southeast, Gulf of Mexico, West Coast, Gulf of Alaska, Bering Sea, Aleutian Islands, Hawai’i Islands). These surveys are designed to collect information on the seasonal distribution, relative abundance, and biodiversity of fish and invertebrate species found in U.S. waters. Over 400 species of fish and invertebrates have been identified in these surveys.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tCRF R: {crf_r}")
+ #print(f"\tCRF RS: {crf_rs}")
+ #print(f"\tCRF RST: {crf_rst}")
+
+ del crf_r, crf_rs, crf_rst
+
+ raster_mask_r = f"{dataset_code}_Raster_Mask"
+ raster_mask_rs = f"{dataset_code}_Raster_Mask_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Raster Mask {date_code(project)}"
+ raster_mask_rst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {raster_mask_r}")
+
+ __datasets_dict[raster_mask_r] = {"Dataset Service" : raster_mask_rs,
+ "Dataset Service Title" : raster_mask_rst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"Raster Mask is used for image production",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tRaster_Mask R: {raster_mask_r}")
+ #print(f"\tRaster_Mask RS: {raster_mask_rs}")
+ #print(f"\tRaster_Mask RST: {raster_mask_rst}")
+
+ del raster_mask_r, raster_mask_rs, raster_mask_rst
+
+ region_fc = f"{dataset_code}_Region"
+ region_fcs = f"{dataset_code}_Region_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Region {date_code(project)}"
+ region_fcst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {region_fc}")
+
+ __datasets_dict[region_fc] = {"Dataset Service" : region_fcs,
+ "Dataset Service Title" : region_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"These files contain the spatial boundaries of the NOAA Fisheries Bottom-trawl surveys. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tRegion FC: {region_fc}")
+ #print(f"\tRegion FCS: {region_fcs}")
+ #print(f"\tRegion FCST: {region_fcst}")
+
+ del region_fc, region_fcs, region_fcst
+
+ survey_area_fc = f"{dataset_code}_Survey_Area"
+ survey_area_fcs = f"{dataset_code}_Survey_Area_{date_code(project)}"
+ feature_service_title = f"{filter_sub_region} {season} Survey Area {date_code(project)}"
+ survey_area_fcst = f"{feature_service_title.replace(' ',' ')}"
+ del feature_service_title
+
+ #print(f"\tProcessing: {survey_area_fc}")
+
+ __datasets_dict[survey_area_fc] = {"Dataset Service" : survey_area_fcs,
+ "Dataset Service Title" : survey_area_fcst,
+ "Tags" : tags,
+ "Summary" : summary,
+ "Description" : f"These files contain the spatial boundaries of the NOAA Fisheries Bottom-trawl surveys. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tSurvey Area FC: {survey_area_fc}")
+ #print(f"\tSurvey Area FCS: {survey_area_fcs}")
+ #print(f"\tSurvey Area FCST: {survey_area_fcst}")
+
+ del survey_area_fc, survey_area_fcs, survey_area_fcst
+
+
+ del tags
+
+ if not distribution_project_code:
+
+ if "Datasets" == dataset_code:
+
+ #print(f"\tProcessing: Datasets")
+
+ datasets_tb = dataset_code
+ datasets_tbs = f"{dataset_code}_{date_code(project)}"
+ datasets_tbst = f"{dataset_code} {date_code(project)}"
+
+ __datasets_dict[datasets_tb] = {"Dataset Service" : datasets_tbs,
+ "Dataset Service Title" : datasets_tbst,
+ "Tags" : "DisMAP, Datasets",
+ "Summary" : summary,
+ "Description" : "This table functions as a look-up table of values",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ del datasets_tb, datasets_tbs, datasets_tbst
+
+ elif "DisMAP_Regions" == dataset_code:
+
+ #print(f"\tProcessing: DisMAP_Regions")
+
+ regions_fc = dataset_code
+ regions_fcs = f"{dataset_code}_{date_code(project)}"
+ regions_fcst = f"DisMAP Regions {date_code(project)}"
+
+ __datasets_dict[regions_fc] = {"Dataset Service" : regions_fcs,
+ "Dataset Service Title" : regions_fcst,
+ "Tags" : "DisMAP Regions",
+ "Summary" : summary,
+ "Description" : "These files contain the spatial boundaries of the NOAA Fisheries Bottom-trawl surveys. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Eastern Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ del regions_fc, regions_fcs, regions_fcst
+
+ elif "Indicators" == dataset_code:
+
+ #print(f"\tProcessing: Indicators")
+
+ indicators_tb = f"{dataset_code}"
+ indicators_tbs = f"{dataset_code}_{date_code(project)}"
+ indicators_tbst = f"{dataset_code} {date_code(project)}"
+
+ __datasets_dict[indicators_tb] = {"Dataset Service" : indicators_tbs,
+ "Dataset Service Title" : indicators_tbst,
+ "Tags" : "DisMAP, Indicators",
+ "Summary" : f"{summary}. This table provides the key metrics used to evaluate a species distribution shift. Information on species distributions is of paramount importance for understanding and preparing for climate-change impacts, and plays a key role in climate-ready fisheries management.",
+ "Description" : f"These data contain the key distribution metrics of center of gravity, range limits, and depth for each species in the portal. This data set covers 8 regions of the United States: Northeast, Southeast, Gulf of Mexico, West Coast, Bering Sea, Aleutian Islands, Gulf of Alaska, and Hawai'i Islands.",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ del indicators_tb, indicators_tbs, indicators_tbst
+
+ elif "LayerSpeciesYearImageName" == dataset_code:
+
+ #print(f"\tProcessing: LayerSpeciesYearImageName")
+
+ layer_species_year_image_name_tb = dataset_code
+ layer_species_year_image_name_tbs = f"{dataset_code}_{date_code(project)}"
+ layer_species_year_image_name_tbst = f"Layer Species Year Image Name Table {date_code(project)}"
+
+ #print(f"\tProcessing: {layer_species_year_image_name_tb}")
+
+ __datasets_dict[layer_species_year_image_name_tb] = {"Dataset Service" : layer_species_year_image_name_tbs,
+ "Dataset Service Title" : layer_species_year_image_name_tbst,
+ "Tags" : "DisMAP, Layer Species Year Image Name Table",
+ "Summary" : summary,
+ "Description" : "This table functions as a look-up table of values",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tLayerSpeciesYearImageName T: {layer_species_year_image_name_tb}")
+ #print(f"\tLayerSpeciesYearImageName TS: {layer_species_year_image_name_tbs}")
+ #print(f"\tLayerSpeciesYearImageName TST: {layer_species_year_image_name_tbst}")
+
+ del layer_species_year_image_name_tb, layer_species_year_image_name_tbs, layer_species_year_image_name_tbst
+
+ elif "Species_Filter" == dataset_code:
+
+ #print(f"\tProcessing: Species_Filter")
+
+ species_filter_tb = dataset_code
+ species_filter_tbs = f"{dataset_code}_{date_code(project)}"
+ species_filter_tbst = f"Species Filter Table {date_code(project)}"
+
+ __datasets_dict[species_filter_tb] = {"Dataset Service" : species_filter_tbs,
+ "Dataset Service Title" : species_filter_tbst,
+ "Tags" : "DisMAP, Species Filter Table",
+ "Summary" : summary,
+ "Description" : "This table functions as a look-up table of values",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tSpecies_Filter T: {species_filter_tb}")
+ #print(f"\tSpecies_Filter TS: {species_filter_tbs}")
+ #print(f"\tSpecies_Filter TST: {species_filter_tbst}")
+
+ del species_filter_tb, species_filter_tbs, species_filter_tbst
+
+ elif "DisMAP_Survey_Info" == dataset_code:
+
+ #print(f"\tProcessing: DisMAP_Survey_Info")
+
+ tb = dataset_code
+ tbs = f"{dataset_code}_{date_code(project)}"
+ tbst = f"DisMAP Survey Info Table {date_code(project)}"
+
+ __datasets_dict[tb] = {"Dataset Service" : tbs,
+ "Dataset Service Title" : tbst,
+ "Tags" : "DisMAP, DisMAP Survey Info Table",
+ "Summary" : summary,
+ "Description" : "This table functions as a look-up table of values",
+ "Credits" : _credits,
+ "Access Constraints" : access_constraints}
+
+ #print(f"\tDisMAP_Survey_Info T: {tb}")
+ #print(f"\tDisMAP_Survey_Info TS: {tbs}")
+ #print(f"\tDisMAP_Survey_Info TST: {tbst}")
+
+ del tb, tbs, tbst
+
+ else:
+ #print(f"\tProcessing: {dataset_code}")
+
+ #table = dataset_code
+ #table_s = f"{dataset_code}_{date_code(project)}"
+ #table_st = f"{table_s.replace('_',' ')} {date_code(project)}"
+ #print(f"\tProcessing: {table_s}")
+ #__datasets_dict[table] = {"Dataset Service" : table_s,
+ # "Dataset Service Title" : table_st,
+ # "Tags" : f"DisMAP, {table}",
+ # "Summary" : summary,
+ # "Description" : "Unknown table",
+ # "Credits" : _credits,
+ # "Access Constraints" : access_constraints}
+
+ #print(f"\tTable: {table}")
+ #print(f"\tTable TS: {table_s}")
+ #print(f"\tTable TST: {table_st}")
+
+ #del table, table_s, table_st
+
+ raise Exception(f"Unknown dataset code: {dataset_code}")
+
+ else:
+ pass
+
+ del summary
+ del point_feature_type, distribution_project_code
+ del filter_region, filter_sub_region, season
+ del dataset_code
+
+ del _credits, access_constraints
+
+ del dataset_codes
+ del project_folder, crf_folder
+ del project, project_gdb
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return __datasets_dict
+ finally:
+ if "__datasets_dict" in locals().keys(): del __datasets_dict
+
+def import_basic_template_xml(dataset_path=""):
+ try:
+ # Import
+ from lxml import etree
+ from io import StringIO, BytesIO
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ project_gdb = os.path.dirname(dataset_path)
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ import json
+ json_path = rf"{project_folder}\root_dict.json"
+ with open(json_path, "r") as json_file:
+ root_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del json
+
+## #print("Creating the Metadata Dictionary. Please wait!!")
+## metadata_dictionary = dataset_title_dict(project_gdb)
+## #print("Creating the Metadata Dictionary. Completed")
+
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+
+ print(f"Dataset: {dataset_name}")
+
+## xml_file = '''
+##
+##
+##
+##
+## Timothy J Haverland
+##
+##
+## tim.haverland@noaa.gov
+##
+##
+## https://www.fisheries.noaa.gov/about/office-science-and-technology
+##
+##
+##
+##
+##
+##
+##
+##
+## Melissa Ann Karp
+##
+##
+## melissa.karp@noaa.gov
+##
+##
+## https://www.fisheries.noaa.gov/about/office-science-and-technology
+##
+##
+##
+##
+##
+##
+##
+##
+##
+##
+## NMFS Office of Science and Technology
+##
+##
+## tim.haverland@noaa.gov
+##
+##
+## https://www.fisheries.noaa.gov/about/office-science-and-technology
+##
+##
+##
+##
+##
+##
+##
+##
+##
+## John F Kennedy
+##
+##
+## john.f.kennedy@noaa.gov
+##
+##
+## https://www.fisheries.noaa.gov/about/office-science-and-technology
+##
+##
+##
+##
+##
+##
+## '''
+
+## xml_file = '''
+##
+##
+## John F Kennedy
+##
+##
+## john.f.kennedy@noaa.gov
+##
+##
+## https://www.fisheries.noaa.gov/about/office-science-and-technology
+##
+##
+##
+##
+##
+##
+## '''
+
+## # Parse the XML
+## dataset_md = md.Metadata(dataset_path)
+## #dataset_md.synchronize('ALWAYS')
+## #dataset_md.save()
+## in_md = md.Metadata(rf"C:\Users\john.f.kennedy\Documents\ArcGIS\Projects\DisMap\ArcGIS-Analysis-Python\December 1 2024\Export\WC_TRI_IDW.xml")
+## dataset_md.copy(in_md)
+## #dataset_md.importMetadata(rf"C:\Users\john.f.kennedy\Documents\ArcGIS\Projects\DisMap\ArcGIS-Analysis-Python\December 1 2024\Export\WC_TRI_IDW.xml", "ARCGIS_METADATA")
+## #dataset_md.importMetadata(xml_file, "ARCGIS_METADATA")
+## dataset_md.save()
+## dataset_md.synchronize("OVERWRITE")
+## dataset_md.save()
+## #dataset_md.reload()
+## #dataset_md.synchronize("ALWAYS")
+## #dataset_md.save()
+## #dataset_md.reload()
+## del in_md
+## del dataset_md
+## del xml_file
+##
+## tags = "DisMap;"
+## summary = "These data were created as part of the DisMAP project to enable visualization and analysis of changes in fish and invertebrate distributions"
+## description = ""
+## project_credits = "These data were produced by NMFS OST."
+## access_constraints = "***No Warranty*** The user assumes the entire risk related to its use of these data. NMFS is providing these data 'as is' and NMFS disclaims any and all warranties, whether express or implied, including (without limitation) any implied warranties of merchantability or fitness for a particular purpose. No warranty expressed or implied is made regarding the accuracy or utility of the data on any other system or for general or scientific purposes, nor shall the act of distribution constitute any such warranty. It is strongly recommended that careful attention be paid to the contents of the metadata file associated with these data to evaluate dataset limitations, restrictions or intended use. In no event will NMFS be liable to you or to any third party for any direct, indirect, incidental, consequential, special or exemplary damages or lost profit resulting from any use or misuse of these data."
+##
+## dataset_md = md.Metadata(dataset_path)
+## dataset_md.title = f"{dataset_name.replace('_', ' ')}"
+## dataset_md.tags = f"{tags}{dataset_name.replace('_', ' ')};"
+## dataset_md.summary = summary
+## dataset_md.description = f"{description}{dataset_name.replace('_', ' ')}"
+## dataset_md.credits = project_credits
+## dataset_md.accessConstraints = access_constraints
+## dataset_md.save()
+## dataset_md.synchronize("ALWAYS")
+## dataset_md.save()
+## del dataset_md
+##
+## del tags, summary, description, project_credits, access_constraints
+
+
+ # Option #1
+ #empty_md = md.Metadata()
+ #dataset_md = md.Metadata(dataset_path)
+ #dataset_md.copy(empty_md)
+ #dataset_md.save()
+ #del empty_md
+ # Option #2
+ #empty_md = md.Metadata(xml_file)
+ #dataset_md = md.Metadata(dataset_path)
+ #dataset_md.copy(empty_md)
+ #dataset_md.save()
+ #del empty_md
+ # Option #3
+ #
+ #dataset_md = md.Metadata(dataset_path)
+ #dataset_md.copy(in_md)
+ #dataset_md.save()
+ #del in_md
+ # Option #4
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.importMetadata(rf"{project_folder}\metadata_template.xml")
+ dataset_md.save()
+ del dataset_md
+
+ tags = "DisMAP;"
+ summary = "These data were created as part of the DisMAP project to enable visualization and analysis of changes in fish and invertebrate distributions"
+ description = ""
+ project_credits = "These data were produced by NMFS OST."
+ access_constraints = "***No Warranty*** The user assumes the entire risk related to its use of these data. NMFS is providing these data 'as is' and NMFS disclaims any and all warranties, whether express or implied, including (without limitation) any implied warranties of merchantability or fitness for a particular purpose. No warranty expressed or implied is made regarding the accuracy or utility of the data on any other system or for general or scientific purposes, nor shall the act of distribution constitute any such warranty. It is strongly recommended that careful attention be paid to the contents of the metadata file associated with these data to evaluate dataset limitations, restrictions or intended use. In no event will NMFS be liable to you or to any third party for any direct, indirect, incidental, consequential, special or exemplary damages or lost profit resulting from any use or misuse of these data."
+
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.title = f"{dataset_name.replace('_', ' ')}"
+ dataset_md.tags = f"{tags}{dataset_name.replace('_', ' ')};"
+ dataset_md.summary = summary
+ dataset_md.description = f"{description}{dataset_name.replace('_', ' ')}"
+ dataset_md.credits = project_credits
+ dataset_md.accessConstraints = access_constraints
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.reload()
+ export_folder = rf"{project_folder}\Export"
+ dataset_md.saveAsXML(rf"{export_folder}\{os.path.basename(dataset_path)}.xml", "REMOVE_ALL_SENSITIVE_INFO")
+ # To parse from a string, use the fromstring() function instead.
+ _tree = etree.parse(rf"{export_folder}\{os.path.basename(dataset_path)}.xml", parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ _root = _tree.getroot()
+ _root[:] = sorted(_root, key=lambda x: root_dict[x.tag])
+ del _root
+ etree.indent(_tree, space='\t')
+ _tree.write(rf"{export_folder}\{os.path.basename(dataset_path)}.xml", encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+ del _tree
+ del export_folder
+ del dataset_md
+
+ del tags, summary, description, project_credits, access_constraints
+
+
+ # Parse the XML
+ dataset_md = md.Metadata(dataset_path)
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md.xml), parser=parser)
+ #target_tree = etree.parse(xml_file, parser=parser)
+ target_root = target_tree.getroot()
+ target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag])
+ etree.indent(target_tree, space='\t')
+ print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ del parser, dataset_md
+
+ del target_tree, target_root
+
+
+
+## _tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## _root = _tree.getroot()
+## distributor = target_root.xpath(f"./distInfo/distributor")
+## if len(distributor) == 0:
+## target_root.xpath(f"./distInfo")[0].insert(distInfo_dict["distributor"], _root)
+## elif len(distributor) == 1:
+## distributor[0].getparent().replace(distributor[0], _root)
+## else:
+## pass
+## del _root, _tree, xml_file
+## #print(f"\n\t{etree.tostring(target_root.xpath(f'./distInfo/distributor')[0], encoding='UTF-8', method='xml', pretty_print=True).decode()}\n")
+## del distributor
+
+
+## mdFileID = target_root.xpath(f"//mdFileID")
+## if mdFileID is not None and len(mdFileID) == 0:
+## _xml = '<mdFileID>gov.noaa.nmfs.inport:</mdFileID>'
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## target_root.insert(root_dict['mdFileID'], _root)
+## del _root, _xml
+## elif mdFileID is not None and len(mdFileID) and len(mdFileID[0]) == 0:
+## mdFileID[0].text = "gov.noaa.nmfs.inport:"
+## elif mdFileID is not None and len(mdFileID) and len(mdFileID[0]) == 1:
+## pass
+## #print(etree.tostring(mdFileID[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del mdFileID
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## mdMaint = target_root.xpath(f"//mdMaint")
+## if mdMaint is not None and len(mdMaint) == 0:
+## _xml = ''
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## target_root.insert(root_dict['mdMaint'], _root)
+## del _root, _xml
+## elif mdMaint is not None and len(mdMaint) and len(mdMaint[0]) == 0:
+## target_root.xpath("./mdMaint/maintFreq/MaintFreqCd")[0].attrib["value"] = "009"
+## elif mdMaint is not None and len(mdMaint) and len(mdMaint[0]) == 1:
+## pass #print(etree.tostring(mdMaint[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## else:
+## pass
+## #print(etree.tostring(mdMaint[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del mdMaint
+##
+## # No changes needed below
+## #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## etree.indent(target_root, space=' ')
+## dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+##
+## SaveBackXml = False
+## if SaveBackXml:
+## dataset_md = md.Metadata(dataset_path)
+## dataset_md.xml = dataset_md_xml
+## dataset_md.save()
+## dataset_md.synchronize("ALWAYS")
+## dataset_md.save()
+## #dataset_md.reload()
+## del dataset_md
+## else:
+## pass
+## del SaveBackXml
+## del dataset_md_xml
+
+ # Declared Variables
+ del root_dict
+ #del target_tree, target_root
+ #del metadata_dictionary,
+ del dataset_name
+ del project_gdb, project_folder, scratch_folder
+ # Imports
+ del md
+ del etree, StringIO, BytesIO
+ # Function Parameters
+ del dataset_path
+ except KeyboardInterrupt:
+ raise SystemExit
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ raise SystemExit
+ except arcpy.ExecuteError:
+ #traceback.print_exc()
+ arcpy.AddError(arcpy.GetMessages(2))
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ raise SystemExit
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+def update_eainfo_xml_elements(dataset_path=""):
+ try:
+ # Imports
+ from lxml import etree
+ from io import StringIO, BytesIO
+ #import copy
+ from arcpy import metadata as md
+ # Project modules
+ #from src.project_tools import pretty_format_xml_file
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ project_gdb = os.path.dirname(dataset_path)
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ import json
+ json_path = rf"{project_folder}\root_dict.json"
+ with open(json_path, "r") as json_file:
+ root_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del json
+
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+ dataset_name = os.path.basename(dataset_path)
+ print(f"Processing Entity Attributes for dataset: '{dataset_name}'")
+
+ # Root
+ mdTimeSt = target_root.find("./mdTimeSt")
+ #print(mdTimeSt)
+ if mdTimeSt is not None:
+ mdTimeSt.getparent().remove(mdTimeSt)
+ else:
+ pass
+ del mdTimeSt
+
+ enttyp = target_root.find(".//enttyp")
+ if enttyp is not None:
+ enttypd = enttyp.find("enttypd")
+ enttypds = enttyp.find("enttypds")
+ if enttypd is None:
+ _xml = "<enttypd>A collection of geographic features with the same geometry type.</enttypd>"
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ enttyp.insert(0, _root)
+ del _root, _xml
+ else:
+ pass
+ if enttypds is None:
+ _xml = "<enttypds>Esri</enttypds>"
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ enttyp.insert(0, _root)
+ del _root, _xml
+ else:
+ pass
+ del enttypds, enttypd
+ else:
+ pass
+ del enttyp
+
+ # Create a list of fields using the ListFields function
+ fields = [f for f in arcpy.ListFields(dataset_path) if f.type not in ["Geometry", "OID"] and f.name not in ["Shape_Area", "Shape_Length"]]
+ for field in fields:
+ attributes = target_root.xpath(f".//attrlabl[text()='{field.name}']/..")
+ if attributes is not None and len(attributes) > 0:
+ for attribute in attributes:
+ #print(attribute)
+ #print(etree.tostring(attribute, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ attrdef = attribute.find("attrdef")
+ if attrdef is None:
+ _xml = f"<attrdef>Definition for: {field.name}</attrdef>"
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ attribute.insert(7, _root)
+ del _root, _xml
+ else:
+ pass
+ attrdefs = attribute.find("attrdefs")
+ if attrdefs is None:
+ _xml = "<attrdefs>NMFS OST DisMAP 2025</attrdefs>"
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ attribute.insert(8, _root)
+ del _root, _xml
+ else:
+ pass
+ attrdomv = attribute.find("attrdomv")
+ if attrdomv is None:
+ _xml = "<attrdomv><udom>None</udom></attrdomv>"
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ attribute.insert(8, _root)
+ del _root, _xml
+ else:
+ pass
+ del attrdef, attrdefs, attrdomv
+ del attribute
+ else:
+ pass
+ del attributes
+ del field
+
+ attributes = target_root.xpath(f".//attr")
+ for attribute in attributes:
+ #print(etree.tostring(attribute, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ del attribute
+ del attributes
+ del fields
+
+ # Metadata
+ target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag])
+
+ # No changes needed below
+ #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ etree.indent(target_tree, space=' ')
+ dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+
+ SaveBackXml = True
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("CREATED")
+ dataset_md.save()
+ #_target_tree = etree.parse(StringIO(dataset_md.xml), parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ #_target_tree.write(rf"{export_folder}\{dataset_name}.xml", pretty_print=True)
+ #print(etree.tostring(_target_tree.find("./eainfo"), encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ #del _target_tree
+ del dataset_md
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ # Declared Variables
+ del dataset_name
+ del target_tree, target_root
+ del project_gdb, project_folder, scratch_folder, root_dict
+ # Imports
+ del md, etree, StringIO, BytesIO
+ # Function Parameters
+ del dataset_path
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+def insert_missing_elements(dataset_path):
+ try:
+ from lxml import etree
+ from arcpy import metadata as md
+ from io import BytesIO, StringIO
+ import copy
+
+ project_gdb = os.path.dirname(dataset_path)
+ project_folder = os.path.dirname(project_gdb)
+ project = os.path.basename(os.path.dirname(project_gdb))
+ export_folder = rf"{project_folder}\Export"
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ import json
+ json_path = rf"{project_folder}\root_dict.json"
+ with open(json_path, "r") as json_file:
+ root_dict = json.load(json_file)
+ json_path = rf"{project_folder}\esri_dict.json"
+ with open(json_path, "r") as json_file:
+ esri_dict = json.load(json_file)
+ json_path = rf"{project_folder}\dataIdInfo_dict.json"
+ with open(json_path, "r") as json_file:
+ dataIdInfo_dict = json.load(json_file)
+ json_path = rf"{project_folder}\contact_dict.json"
+ with open(json_path, "r") as json_file:
+ contact_dict = json.load(json_file)
+ json_path = rf"{project_folder}\dqInfo_dict.json"
+ with open(json_path, "r") as json_file:
+ dqInfo_dict = json.load(json_file)
+ json_path = rf"{project_folder}\distInfo_dict.json"
+ with open(json_path, "r") as json_file:
+ distInfo_dict = json.load(json_file)
+ #json_path = rf"{project_folder}\RoleCd_dict.json"
+ #with open(json_path, "r") as json_file:
+ # RoleCd_dict = json.load(json_file)
+ #json_path = rf"{project_folder}\tpCat_dict.json"
+ #with open(json_path, "r") as json_file:
+ # tpCat_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del json
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+ del scratch_folder
+ del project_folder
+ del project_gdb
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # Get contact information
+ contacts_xml = rf"{os.environ['USERPROFILE']}\Documents\ArcGIS\Descriptions\contacts.xml"
+ contacts_xml_tree = etree.parse(contacts_xml, parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True)) # To parse from a string, use the fromstring() function instead.
+ del contacts_xml
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # Get preferred contact information
+ mdContact_rpIndName = contact_dict["mdContact"][0]["rpIndName"]
+ mdContact_eMailAdd = contact_dict["mdContact"][0]["eMailAdd"]
+ mdContact_role = contact_dict["mdContact"][0]["role"]
+
+ citRespParty_rpIndName = contact_dict["citRespParty"][0]["rpIndName"]
+ citRespParty_eMailAdd = contact_dict["citRespParty"][0]["eMailAdd"]
+ citRespParty_role = contact_dict["citRespParty"][0]["role"]
+
+ idPoC_rpIndName = contact_dict["idPoC"][0]["rpIndName"]
+ idPoC_eMailAdd = contact_dict["idPoC"][0]["eMailAdd"]
+ idPoC_role = contact_dict["idPoC"][0]["role"]
+
+ distorCont_rpIndName = contact_dict["distorCont"][0]["rpIndName"]
+ distorCont_eMailAdd = contact_dict["distorCont"][0]["eMailAdd"]
+ distorCont_role = contact_dict["distorCont"][0]["role"]
+
+ srcCitatn_rpIndName = contact_dict["srcCitatn"][0]["rpIndName"]
+ srcCitatn_eMailAdd = contact_dict["srcCitatn"][0]["eMailAdd"]
+ srcCitatn_role = contact_dict["srcCitatn"][0]["role"]
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ #print(etree.tostring(target_root, encoding='UTF-8', method='xml', pretty_print=True).decode())
+
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+ CreaDate = target_root.xpath(f"//Esri/CreaDate")[0].text
+ CreaTime = target_root.xpath(f"//Esri/CreaTime")[0].text
+ #print(CreaDate, CreaTime)
+ CreaDateTime = f"{CreaDate[:4]}-{CreaDate[4:6]}-{CreaDate[6:]}T{CreaTime[:2]}:{CreaTime[2:4]}:{CreaTime[4:6]}"
+ #print(f"\tCreaDateTime: {CreaDateTime}")
+ #del CreaDateTime
+ del CreaDate, CreaTime
+ ModDate = target_root.xpath(f"//Esri/ModDate")[0].text
+ ModTime = target_root.xpath(f"//Esri/ModTime")[0].text
+ #print(ModDate, ModTime)
+ ModDateTime = f"{ModDate[:4]}-{ModDate[4:6]}-{ModDate[6:]}T{ModTime[:2]}:{ModTime[2:4]}:{ModTime[4:6]}"
+ #print(f"\tModDateTime: {ModDateTime}")
+ #del ModDateTime
+ del ModDate, ModTime
+
+ dataset_name = os.path.basename(dataset_path)
+ print(f"Processing/Updating elements for dataset: '{dataset_name}'")
+
+ xml_file = b'''
+
+
+ ISO 19139 Metadata Implementation Specification GML3.2
+ ISO19139
+
+
+
+
+
+
+ feature class name
+ feature class name
+ NMFS OST DisMAP
+
+
+
+
+
+
+
+
+
+
+ external
+ 810ad1c47a347c4bf5c88f2ea5077cd5d7a1bdcb
+ Timothy J Haverland
+ NMFS Office of Science and Technology
+ GIS App Developer
+
+
+ 1315 East West Highway
+ Silver Spring
+ MD
+ 20910-3282
+ tim.haverland@noaa.gov
+
+
+ 301-427-8137
+ 301-713-4137
+
+ 0700 - 1800 EST/EDT
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+ Timothy J Haverland
+ True
+
+
+
+
+
+
+
+
+ Global Change Master Directory (GCMD) Science Keywords
+
+
+
+
+
+
+ https://www.fisheries.noaa.gov/inport/help/components/keywords
+ REST Service
+ GCMD
+ Global Change Master Directory (GCMD) Science Keywords
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Global Change Master Directory (GCMD) Location Keywords
+
+
+
+
+
+
+ https://www.fisheries.noaa.gov/inport/help/components/keywords
+ REST Service
+ GCMD
+ Global Change Master Directory (GCMD) Location Keywords
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Global Change Master Directory (GCMD) Temporal Data Resolution Keywords
+
+
+
+
+
+
+ https://www.fisheries.noaa.gov/inport/help/components/keywords
+ REST Service
+ GCMD
+ Global Change Master Directory (GCMD) Temporal Data Resolution Keywords
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Integrated Taxonomic Information System (ITIS)
+
+
+
+
+
+
+ https://www.itis.gov
+ REST Service
+ ITIS
+ Integrated Taxonomic Information System (ITIS)
+
+
+
+
+
+
+
+
+
+
+ external
+ c66ffbb333c48d18d81856ec0e0c37ea752bff1a
+ Melissa Ann Karp
+ NMFS Office of Science and Technology
+ Fisheries Science Coordinator
+
+
+ 1315 East West Hwy
+ Silver Spring
+ MD
+ 20910-3282
+ melissa.karp@noaa.gov
+ US
+
+
+ 301-427-8202
+ 301-713-4137
+
+ 0700 - 1800 EST/EDT
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+ Melissa Ann Karp
+ True
+
+
+
+
+
+
+
+
+
+
+
+
+ Data License: CC0-1.0
+ Data License URL: https://creativecommons.org/publicdomain/zero/1.0/
+ Data License Statement: These data were produced by NOAA and are not subject to copyright protection in the United States. NOAA waives any potential copyright and related rights in these data worldwide through the Creative Commons Zero 1.0 Universal Public Domain Dedication (CC0-1.0).
+
+
+
+
+
+ FISMA Low
+
+
+ <DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>***No Warranty*** The user assumes the entire risk related to its use of these data. NMFS is providing these data 'as is' and NMFS disclaims any and all warranties, whether express or implied, including (without limitation) any implied warranties of merchantability or fitness for a particular purpose. No warranty expressed or implied is made regarding the accuracy or utility of the data on any other system or for general or scientific purposes, nor shall the act of distribution constitute any such warranty. It is strongly recommended that careful attention be paid to the contents of the metadata file associated with these data to evaluate dataset limitations, restrictions or intended use. In no event will NMFS be liable to you or to any third party for any direct, indirect, incidental, consequential, special or exemplary damages or lost profit resulting from any use or misuse of these data.</SPAN></P></DIV></DIV></DIV>
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ dataset
+
+
+
+ Based on a review from DisMAP Team all necessary features are present.
+
+
+
+ Conceptual Consistency Report
+
+ NMFS OST DisMAP
+
+
+
+
+
+
+ Based on a review from DisMAP Team all necessary features are present.
+ 1
+
+
+
+
+ Based on a review from DisMAP Team all necessary features are present.
+
+
+
+ Completeness Report
+
+ NMFS OST DisMAP
+
+
+
+
+
+
+ Based on a review from DisMAP Team all necessary features are present.
+ 1
+
+
+
+
+
+
+ Data for 'region' 'version'
+
+
+
+
+ Source Citation for:
+
+ NMFS OST DisMAP
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+
+
+
+
+
+ document
+
+
+
+ external
+ c66ffbb333c48d18d81856ec0e0c37ea752bff1a
+ Melissa Ann Karp
+ NMFS Office of Science and Technology
+ Fisheries Science Coordinator
+
+
+ 1315 East West Hwy
+ Silver Spring
+ MD
+ 20910-3282
+ melissa.karp@noaa.gov
+ US
+
+
+ 301-427-8202
+ 301-713-4137
+
+ 0700 - 1800 EST/EDT
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+ Melissa Ann Karp
+ True
+
+
+
+
+
+
+
+ Geoprocessing Steps for 'region' 'version date'
+
+
+ external
+ c66ffbb333c48d18d81856ec0e0c37ea752bff1a
+ Melissa Ann Karp
+ NMFS Office of Science and Technology
+ Fisheries Science Coordinator
+
+
+ 1315 East West Hwy
+ Silver Spring
+ MD
+ 20910-3282
+ melissa.karp@noaa.gov
+ US
+
+
+ 301-427-8202
+ 301-713-4137
+
+ 0700 - 1800 EST/EDT
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+ Melissa Ann Karp
+ True
+
+
+
+
+
+ external
+ b212accd6134b5457de3ed1debca061419d927ce
+ John F Kennedy
+ NMFS Office of Science and Technology
+ GIS Specialist
+
+
+ 1315 East West Highway
+ Silver Spring
+ MD
+ 20910-3282
+ US
+ john.f.kennedy@noaa.gov
+
+
+ 301-427-8149
+ 301-713-4137
+
+ 0930 - 2030 EST/EDT
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+ John F Kennedy
+ True
+
+
+
+
+
+ DisMAP
+
+
+
+
+
+
+
+ ESRI REST Service
+
+ Uncompressed
+
+
+
+
+ external
+ 579ce2e21b888ac8f6ac1dac30f04cddec7a0d7c
+ NMFS Office of Science and Technology
+ NMFS Office of Science and Technology
+ GIS App Developer
+
+
+ 1315 East West Highway
+ Silver Spring
+ MD
+ 20910-3282
+ US
+ tim.haverland@noaa.gov
+
+
+ 301-427-8137
+ 301-713-4137
+
+ 0700 - 1800 EST/EDT
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+ NMFS Office of Science and Technology (Distributor)
+ True
+
+
+
+
+
+
+ MB
+ 8
+
+ https://services2.arcgis.com/C8EMgrsFcRFL6LrL/arcgis/rest/services/.../FeatureServer
+ ESRI REST Service
+ NMFS Office of Science and Technology
+ Dataset Feature Service
+
+
+
+
+
+
+
+ external
+ b212accd6134b5457de3ed1debca061419d927ce
+ John F Kennedy
+ NMFS Office of Science and Technology
+ GIS Specialist
+
+
+ 1315 East West Highway
+ Silver Spring
+ MD
+ 20910-3282
+ US
+ john.f.kennedy@noaa.gov
+
+
+ 301-427-8149
+ 301-713-4137
+
+ 0930 - 2030 EST/EDT
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+ John F Kennedy (Metadata Author)
+ True
+
+
+
+
+ <.>
+ '''
+
+ source_tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ source_root = source_tree.getroot()
+
+ # Merge Target with Source
+ target_source_merge = xml_tree_merge(target_root, source_root)
+ #print(etree.tostring(target_source_merge, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ # Merge Source with Target
+ source_target_merge = xml_tree_merge(target_source_merge, target_root)
+ #print(etree.tostring(source_target_merge, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ del target_source_merge
+ del source_tree, source_root, xml_file
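The two `xml_tree_merge` calls above run the merge in both directions so each tree fills the other's gaps. A minimal sketch of that kind of recursive merge, using the stdlib `ElementTree` in place of lxml (the script's real `xml_tree_merge` is defined elsewhere and may differ in detail; `merge_elements` here is a hypothetical stand-in):

```python
import xml.etree.ElementTree as ET

def merge_elements(target, source):
    """Recursively fill target from source: copy elements target lacks,
    fill empty text, and recurse into elements both trees share.
    (Hypothetical helper, not the script's actual xml_tree_merge.)"""
    for child in source:
        match = target.find(child.tag)
        if match is None:
            target.append(child)          # element missing: copy it over
        else:
            if match.text is None and child.text is not None:
                match.text = child.text   # fill empty text from source
            merge_elements(match, child)  # recurse into shared elements
    return target

target = ET.fromstring("<metadata><title/><author>A</author></metadata>")
source = ET.fromstring("<metadata><title>T</title><year>2025</year></metadata>")
merged = merge_elements(target, source)
```

Running the merge a second time with the arguments swapped, as the script does, then also copies anything only the first tree had.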
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ #
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ dataset_md_xml = etree.tostring(source_target_merge, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=False)
+
+ SaveBackXml = False
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("CREATED")
+ dataset_md.save()
+ #dataset_md.reload()
+ #dataset_md_xml = dataset_md.xml
+ del dataset_md
+ # Parse the XML
+ #_target_tree = etree.parse(StringIO(dataset_md_xml), parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ #del dataset_md_xml
+ #print(etree.tostring(_target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ #del _target_tree
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ #
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+## target_root.xpath("./distInfo/distFormat/formatName")[0].set('Sync', "FALSE")
+## target_root.xpath("./dataIdInfo/envirDesc")[0].set('Sync', "TRUE")
+
+## for key in root_dict:
+## #print(key)
+## elem = target_root.find(f"./{key}")
+## if elem is not None and len(elem) > 0:
+## #print(elem.tag)
+## pass
+## del elem
+## del key
+##
+## # Root
+## mdTimeSt = target_root.find("./mdTimeSt")
+## #print(mdTimeSt)
+## if mdTimeSt is not None:
+## mdTimeSt.getparent().remove(mdTimeSt)
+## else:
+## pass
+## del mdTimeSt
+
+## # Metadata
+## target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag])
+## # Esri
+## Esri = target_root.xpath("./Esri")[0]
+## Esri[:] = sorted(Esri, key=lambda x: esri_dict[x.tag])
+## #print(etree.tostring(Esri, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del Esri
+## # dataIdInfo
+## dataIdInfo = target_root.xpath("./dataIdInfo")[0]
+## dataIdInfo[:] = sorted(dataIdInfo, key=lambda x: dataIdInfo_dict[x.tag])
+## #print(etree.tostring(dataIdInfo, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del dataIdInfo
+## # dqInfo
+## dqInfo = target_root.xpath("./dqInfo")[0]
+## dqInfo[:] = sorted(dqInfo, key=lambda x: dqInfo_dict[x.tag])
+## #print(etree.tostring(dqInfo, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del dqInfo
+## # distInfo
+## distInfo = target_root.xpath("./distInfo")[0]
+## distInfo[:] = sorted(distInfo, key=lambda x: distInfo_dict[x.tag])
+## #print(etree.tostring(distInfo, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del distInfo
+ # mdContact
+ # mdLang
+ # mdHrLv
+ # refSysInfo
+ # spatRepInfo
+ # spdoinfo
+ # eainfo
+
+## enttyp = target_root.find("enttyp")
+## if enttyp is not None:
+## enttypd = enttyp.find("enttypd")
+## enttypds = enttyp.find("enttypds")
+## if enttypd is None:
+## _xml = "A collection of geographic features with the same geometry type."
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## enttyp.insert(0, _root)
+## del _root, _xml
+## else:
+## pass
+## if enttypds is None:
+## _xml = "Esri"
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## enttyp.insert(0, _root)
+## del _root, _xml
+## else:
+## pass
+## del enttypds, enttypd
+## else:
+## pass
+## del enttyp
+
+ # idCredit completed
+ #for idCredit in target_root.xpath("./dataIdInfo/idCredit"):
+ # #print(etree.tostring(idCredit, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ # #idCredit.text = "NOAA Fisheries. 2025.."
+ # del idCredit
+ #print(etree.tostring(target_root.xpath("./dataIdInfo/idCredit")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+
+## # resTitle completed
+## resTitle = target_root.xpath("./dataIdInfo/idCitation/resTitle")[0]
+## target_root.xpath("./dqInfo/dataLineage/dataSource/srcCitatn/resTitle")[0].text = resTitle.text
+## #print(f"\tresTitle: {resTitle.text}")
+## #resTitle.text = f""
+## del resTitle
+## resAltTitle = target_root.xpath("./dataIdInfo/idCitation/resAltTitle")[0]
+## target_root.xpath("./dqInfo/dataLineage/dataSource/srcCitatn/resAltTitle")[0].text = resAltTitle.text
+## #print(f"\tresAltTitle: {resAltTitle.text}")
+## #resAltTitle.text = f""
+## del resAltTitle
+## collTitle = target_root.xpath("./dataIdInfo/idCitation/collTitle")[0]
+## #print(f"\tcollTitle: {collTitle.text}")
+## collTitle.text = "NMFS OST DisMAP"
+## target_root.xpath("./dqInfo/dataLineage/dataSource/srcCitatn/collTitle")[0].text = collTitle.text
+## #print(f"\tcollTitle: {collTitle.text}")
+## del collTitle
+
+## resConst = target_root.xpath("./dataIdInfo/resConst")
+## if len(resConst) == 1:
+## xml_file = b'''
+##
+##
+##
+##
+##
+##
+##
+## Data License: CC0-1.0
+##Data License URL: https://creativecommons.org/publicdomain/zero/1.0/
+##Data License Statement: These data were produced by NOAA and are not subject to copyright protection in the United States. NOAA waives any potential copyright and related rights in these data worldwide through the Creative Commons Zero 1.0 Universal Public Domain Dedication (CC0-1.0).
+##
+##
+##
+##
+##
+##
+## FISMA Low
+##
+##
+## <DIV STYLE="text-align:Left;"><DIV><DIV><P><SPAN>***No Warranty*** The user assumes the entire risk related to its use of these data. NMFS is providing these data 'as is' and NMFS disclaims any and all warranties, whether express or implied, including (without limitation) any implied warranties of merchantability or fitness for a particular purpose. No warranty expressed or implied is made regarding the accuracy or utility of the data on any other system or for general or scientific purposes, nor shall the act of distribution constitute any such warranty. It is strongly recommended that careful attention be paid to the contents of the metadata file associated with these data to evaluate dataset limitations, restrictions or intended use. In no event will NMFS be liable to you or to any third party for any direct, indirect, incidental, consequential, special or exemplary damages or lost profit resulting from any use or misuse of these data.</SPAN></P></DIV></DIV></DIV>
+##
+## '''
+## _tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## _root = _tree.getroot()
+## resConst[0].getparent().replace(resConst[0], _root)
+## del _root, _tree, xml_file
+## else:
+## pass
+## del resConst
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # discKeys, themeKeys, placeKeys, tempKeys
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ #searchKeys = target_root.xpath("./dataIdInfo/searchKeys")
+ #for searchKey in searchKeys:
+ # #print(etree.tostring(searchKey, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ # del searchKey
+ #del searchKeys
+## searchKeys = target_root.xpath("./dataIdInfo/searchKeys")
+## for searchKey in searchKeys:
+## #print(etree.tostring(searchKey, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## for keyword in searchKey.xpath("./keyword"):
+## if isinstance(keyword.text, type(None)):
+## keyword.getparent().remove(keyword)
+## else:
+## pass #print(etree.tostring(keyword, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del searchKey
+## del searchKeys
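The commented-out loop above drops `keyword` elements whose text is `None`. The same pattern as a runnable sketch with the stdlib `ElementTree` (lxml offers `keyword.getparent().remove(...)`; stdlib ElementTree removes through the parent handle instead — the sample XML here is illustrative only):

```python
import xml.etree.ElementTree as ET

search_keys = ET.fromstring(
    "<searchKeys><keyword>cod</keyword><keyword/><keyword>NMFS</keyword></searchKeys>"
)
# Collect first, then remove: mutating a parent while iterating its
# children can skip siblings.
empties = [kw for kw in search_keys.findall("keyword") if kw.text is None]
for kw in empties:
    search_keys.remove(kw)
```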
+## target_root.xpath("./dataIdInfo/searchKeys/keyword")[0].text = f"{species_range_dict[dataset_name]['LISTENTITY']}; ESA; range; NMFS"
+##
+## keywords = target_root.xpath("./dataIdInfo/discKeys/keyword")
+## if keywords is not None and len(keywords) and len(keywords[0]) == 0:
+## keyword = target_root.xpath("./dataIdInfo/discKeys/keyword")[0]
+## keyword.text = f"{species_range_dict[dataset_name]['SCIENAME']}"
+## del keyword
+## #elif keywords is not None and len(keywords) and len(keywords[0]) >= 1:
+## # pass
+## del keywords
+## createDate = target_root.xpath("./dataIdInfo/discKeys/thesaName/date/createDate")
+## if createDate is not None and len(createDate) and len(createDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/discKeys/thesaName/date/createDate")[0].text = CreaDateTime
+## elif createDate is not None and len(createDate) and len(createDate[0]) == 1:
+## pass
+## else:
+## pass
+## del createDate
+## pubDate = target_root.xpath("./dataIdInfo/discKeys/thesaName/date/pubDate")
+## if pubDate is not None and len(pubDate) and len(pubDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/discKeys/thesaName/date/pubDate")[0].text = CreaDateTime
+## elif pubDate is not None and len(pubDate) and len(pubDate[0]) == 1:
+## pass
+## else:
+## pass
+## del pubDate
+## reviseDate = target_root.xpath("./dataIdInfo/discKeys/thesaName/date/reviseDate")
+## if reviseDate is not None and len(reviseDate) and len(reviseDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/discKeys/thesaName/date/reviseDate")[0].text = ModDateTime
+## elif reviseDate is not None and len(reviseDate) and len(reviseDate[0]) == 1:
+## pass
+## else:
+## pass
+## del reviseDate
+## resTitle = target_root.xpath("./dataIdInfo/discKeys/thesaName/resTitle")
+## if resTitle is not None and len(resTitle) and len(resTitle[0]) == 0:
+## target_root.xpath("./dataIdInfo/discKeys/thesaName/resTitle")[0].text = "Integrated Taxonomic Information System (ITIS)"
+## elif resTitle is not None and len(resTitle) and len(resTitle[0]) == 1:
+## pass
+## else:
+## pass
+## del resTitle
+## discKeys = target_root.xpath("./dataIdInfo/discKeys")
+## for i in range(0, len(discKeys)):
+## #print(etree.tostring(discKeys[i], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del i
+## del discKeys
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # themeKeys
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## keywords = target_root.xpath("./dataIdInfo/themeKeys/keyword")
+## if keywords is not None and len(keywords) and len(keywords[0]) == 0:
+## new_item_name = target_root.find("./Esri/DataProperties/itemProps/itemName").text
+## keyword = target_root.xpath("./dataIdInfo/themeKeys/keyword")[0]
+## keyword.text = f"{species_range_dict[dataset_name]['COMNAME'].title()}; {species_range_dict[dataset_name]['SCIENAME']}; Endangered Species; NMFS"
+## del keyword, new_item_name
+## elif keywords is not None and len(keywords) and len(keywords[0]) >= 1:
+## pass
+## del keywords
+## createDate = target_root.xpath("./dataIdInfo/themeKeys/thesaName/date/createDate")
+## if createDate is not None and len(createDate) and len(createDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/themeKeys/thesaName/date/createDate")[0].text = CreaDateTime
+## elif createDate is not None and len(createDate) and len(createDate[0]) == 1:
+## pass
+## else:
+## pass
+## del createDate
+## pubDate = target_root.xpath("./dataIdInfo/themeKeys/thesaName/date/pubDate")
+## if pubDate is not None and len(pubDate) and len(pubDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/themeKeys/thesaName/date/pubDate")[0].text = CreaDateTime
+## elif pubDate is not None and len(pubDate) and len(pubDate[0]) == 1:
+## pass
+## else:
+## pass
+## del pubDate
+## reviseDate = target_root.xpath("./dataIdInfo/themeKeys/thesaName/date/reviseDate")
+## if reviseDate is not None and len(reviseDate) and len(reviseDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/themeKeys/thesaName/date/reviseDate")[0].text = ModDateTime
+## elif reviseDate is not None and len(reviseDate) and len(reviseDate[0]) == 1:
+## pass
+## else:
+## pass
+## del reviseDate
+## resTitle = target_root.xpath("./dataIdInfo/themeKeys/thesaName/resTitle")
+## if resTitle is not None and len(resTitle) and len(resTitle[0]) == 0:
+## target_root.xpath("./dataIdInfo/themeKeys/thesaName/resTitle")[0].text = "Global Change Master Directory (GCMD) Science Keyword"
+## elif resTitle is not None and len(resTitle) and len(resTitle[0]) == 1:
+## pass
+## else:
+## pass
+## del resTitle
+## themeKeys = target_root.xpath("./dataIdInfo/themeKeys")
+## for i in range(0, len(themeKeys)):
+## #print(etree.tostring(themeKeys[i], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del i
+## del themeKeys
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # placeKeys
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## keywords = target_root.xpath("./dataIdInfo/placeKeys/keyword")
+## if keywords is not None and len(keywords) and len(keywords[0]) == 0:
+## new_item_name = target_root.find("./Esri/DataProperties/itemProps/itemName").text
+## keyword = target_root.xpath("./dataIdInfo/placeKeys/keyword")[0]
+## keyword.text = f"Enter place/geography keywords for {new_item_name}, separated by a semicolon"
+## del keyword, new_item_name
+## elif keywords is not None and len(keywords) and len(keywords[0]) >= 1:
+## pass
+## del keywords
+## createDate = target_root.xpath("./dataIdInfo/placeKeys/thesaName/date/createDate")
+## if createDate is not None and len(createDate) and len(createDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/placeKeys/thesaName/date/createDate")[0].text = CreaDateTime
+## elif createDate is not None and len(createDate) and len(createDate[0]) == 1:
+## pass
+## else:
+## pass
+## del createDate
+## pubDate = target_root.xpath("./dataIdInfo/placeKeys/thesaName/date/pubDate")
+## if pubDate is not None and len(pubDate) and len(pubDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/placeKeys/thesaName/date/pubDate")[0].text = CreaDateTime
+## elif pubDate is not None and len(pubDate) and len(pubDate[0]) == 1:
+## pass
+## else:
+## pass
+## del pubDate
+## reviseDate = target_root.xpath("./dataIdInfo/placeKeys/thesaName/date/reviseDate")
+## if reviseDate is not None and len(reviseDate) and len(reviseDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/placeKeys/thesaName/date/reviseDate")[0].text = ModDateTime
+## elif reviseDate is not None and len(reviseDate) and len(reviseDate[0]) == 1:
+## pass
+## else:
+## pass
+## del reviseDate
+## resTitle = target_root.xpath("./dataIdInfo/placeKeys/thesaName/resTitle")
+## if resTitle is not None and len(resTitle) and len(resTitle[0]) == 0:
+## target_root.xpath("./dataIdInfo/placeKeys/thesaName/resTitle")[0].text = "Global Change Master Directory (GCMD) Location Keywords"
+## elif resTitle is not None and len(resTitle) and len(resTitle[0]) == 1:
+## pass
+## else:
+## pass
+## del resTitle
+## placeKeys = target_root.xpath("./dataIdInfo/placeKeys")
+## for i in range(0, len(placeKeys)):
+## #print(etree.tostring(placeKeys[i], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del i
+## del placeKeys
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # tempKeys
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## keywords = target_root.xpath("./dataIdInfo/tempKeys/keyword")
+## if keywords is not None and len(keywords) and len(keywords[0]) == 0:
+## new_item_name = target_root.find("./Esri/DataProperties/itemProps/itemName").text
+## keyword = target_root.xpath("./dataIdInfo/tempKeys/keyword")[0]
+## keyword.text = f"Enter temporal keywords (e.g. year, year range, season, etc.) for {new_item_name}, separated by a semicolon"
+## del keyword, new_item_name
+## elif keywords is not None and len(keywords) and len(keywords[0]) >= 1:
+## pass
+## del keywords
+## createDate = target_root.xpath("./dataIdInfo/tempKeys/thesaName/date/createDate")
+## if createDate is not None and len(createDate) and len(createDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/tempKeys/thesaName/date/createDate")[0].text = CreaDateTime
+## elif createDate is not None and len(createDate) and len(createDate[0]) == 1:
+## pass
+## else:
+## pass
+## del createDate
+## pubDate = target_root.xpath("./dataIdInfo/tempKeys/thesaName/date/pubDate")
+## if pubDate is not None and len(pubDate) and len(pubDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/tempKeys/thesaName/date/pubDate")[0].text = CreaDateTime
+## elif pubDate is not None and len(pubDate) and len(pubDate[0]) == 1:
+## pass
+## else:
+## pass
+## del pubDate
+## reviseDate = target_root.xpath("./dataIdInfo/tempKeys/thesaName/date/reviseDate")
+## if reviseDate is not None and len(reviseDate) and len(reviseDate[0]) == 0:
+## target_root.xpath("./dataIdInfo/tempKeys/thesaName/date/reviseDate")[0].text = ModDateTime
+## elif reviseDate is not None and len(reviseDate) and len(reviseDate[0]) == 1:
+## pass
+## else:
+## pass
+## del reviseDate
+## resTitle = target_root.xpath("./dataIdInfo/tempKeys/thesaName/resTitle")
+## if resTitle is not None and len(resTitle) and len(resTitle[0]) == 0:
+## target_root.xpath("./dataIdInfo/tempKeys/thesaName/resTitle")[0].text = "Global Change Master Directory (GCMD) Temporal Data Resolution Keywords"
+## elif resTitle is not None and len(resTitle) and len(resTitle[0]) == 1:
+## pass
+## else:
+## pass
+## del resTitle
+## tempKeys = target_root.xpath("./dataIdInfo/tempKeys")
+## for i in range(0, len(tempKeys)):
+## #print(etree.tostring(tempKeys[i], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del i
+## del tempKeys
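The discKeys/themeKeys/placeKeys/tempKeys blocks above all repeat one check: "if the element exists but is empty, fill it in". A small helper collapsing that pattern (hypothetical, not part of this script; stdlib `ElementTree` standing in for the lxml `xpath()` calls):

```python
import xml.etree.ElementTree as ET

def set_if_empty(root, path, value):
    """Fill the first element at path only when it has neither text nor
    children -- the test repeated for each createDate/pubDate/reviseDate
    and resTitle above. (Hypothetical helper.)"""
    hits = root.findall(path)
    if hits and hits[0].text is None and len(hits[0]) == 0:
        hits[0].text = value
    return root

root = ET.fromstring(
    "<dataIdInfo><tempKeys><thesaName><resTitle/></thesaName></tempKeys></dataIdInfo>"
)
set_if_empty(root, "./tempKeys/thesaName/resTitle", "GCMD Temporal Keywords")
```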
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # Data Extent
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## dataExt = target_root.xpath("./dataIdInfo/dataExt")[0]
+## exDesc = dataExt.xpath("//exDesc")
+## if len(exDesc) == 0:
+## _xml = "[Location extent description]. The data represents an approximate distribution of the listed entity based on the best available information from [date of first source] to [date of final species expert review]."
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## dataExt.insert(0, _root)
+## del _root, _xml
+## elif len(exDesc) == 1:
+## exDesc[0].text = "[Location extent description]. The data represents an approximate distribution of the listed entity based on the best available information from [date of first source] to [date of final species expert review]."
+## else:
+## pass
+## del exDesc
+## tempEle = dataExt.xpath("//tempEle")
+## if len(tempEle) == 0:
+## _xml = f' \
+## {CreaDateTime}{ModDateTime} \
+## {ModDateTime}'
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## dataExt.insert(2, _root)
+## del _root, _xml
+## elif len(tempEle) == 1:
+## _xml = f' \
+## {CreaDateTime}{ModDateTime} \
+## {ModDateTime}'
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## tempEle[0].getparent().replace(tempEle[0], _root)
+## del _root, _xml
+## del tempEle
+## del dataExt
+ #dataExt = target_root.xpath("./dataIdInfo/dataExt")
+ #for i in range(0, len(dataExt)):
+ # #print(etree.tostring(dataExt[i], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ # del i
+ #del dataExt
+ #dataExt = target_root.xpath("./dataIdInfo/dataExt")
+ #for i in range(1, len(dataExt)):
+ # dataExt[i].getparent().remove(dataExt[i])
+ # del i
+ #del dataExt
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## #target_root.xpath("./dqInfo/dqScope/scpLvl/ScopeCd")[0].set('value', "005")
+## PresFormCd = target_root.xpath("./dataIdInfo/idCitation/presForm/PresFormCd")[0]
+## fgdcGeoform = target_root.xpath("./dataIdInfo/idCitation/presForm/fgdcGeoform")[0]
+## SpatRepTypCd = target_root.xpath("./dataIdInfo/spatRpType/SpatRepTypCd")[0]
+## PresFormCd.set('Sync', "TRUE")
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # SpatRepTypCd "Empty" "001" (vector) "002" (raster/grid) "003" (tabular)
+## # PresFormCd "005" "003" "011"
+## # fgdcGeoform "vector data" "raster data" "tabular data"
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## datasetSet = target_root.xpath("./dqInfo/dqScope/scpLvlDesc/datasetSet")[0]
+## if SpatRepTypCd.get("value") == "001":
+## PresFormCd.set("value", "005")
+## fgdcGeoform.text = "vector digital data"
+## datasetSet.text = "Vector Digital Data"
+## elif SpatRepTypCd.get("value") == "002":
+## PresFormCd.set("value", "003")
+## fgdcGeoform.text = "raster digital data"
+## datasetSet.text = "Raster Digital Data"
+## elif SpatRepTypCd.get("value") == "003":
+## PresFormCd.set("value", "011")
+## fgdcGeoform.text = "tabular digital data"
+## datasetSet.text = "Tabular Digital Data"
+## else:
+## pass
+## #print("------" * 10)
+## #print(etree.tostring(SpatRepTypCd, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## #print(etree.tostring(target_root.xpath("./dataIdInfo/idCitation/presForm")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## #print(etree.tostring(target_root.xpath("./dqInfo/dqScope")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## #print("------" * 10)
+## del datasetSet, SpatRepTypCd, fgdcGeoform, PresFormCd
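The if/elif chain above encodes the SpatRepTypCd-to-presentation-form mapping listed in the comment table. The same mapping as a lookup table (codes transcribed from the comments above; `presentation_form` is a hypothetical helper, not part of the script):

```python
# Tuple layout: (PresFormCd value, fgdcGeoform text, datasetSet text).
FORM_BY_SPAT_REP = {
    "001": ("005", "vector digital data", "Vector Digital Data"),
    "002": ("003", "raster digital data", "Raster Digital Data"),
    "003": ("011", "tabular digital data", "Tabular Digital Data"),
}

def presentation_form(spat_rep_code):
    """Look up the presentation-form values for a SpatRepTypCd value;
    returns None for unmapped codes, matching the chain's final pass."""
    return FORM_BY_SPAT_REP.get(spat_rep_code)
```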
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## formatName = target_root.xpath("./distInfo/distFormat/formatName")[0]
+## envirDesc = target_root.xpath("./dataIdInfo/envirDesc")[0]
+## envirDesc.set('Sync', "TRUE")
+## target_root.xpath("./distInfo/distFormat/fileDecmTech")[0].text = "Uncompressed"
+## # # 001 = Vector
+## #''' # 002 = Grid
+## # # 003 = Text Table
+## #'''
+## #format_name_text = ""
+## #try:
+## # GeoObjTypCd = target_root.xpath("./spatRepInfo/VectSpatRep/geometObjs/geoObjTyp/GeoObjTypCd")[0].get("value")
+## # if GeoObjTypCd == "002":
+## # format_name_text = "ESRI File Geodatabase"
+## # del GeoObjTypCd
+## #except:
+## # format_name_text = "ESRI Geodatabase Table"
+## formatName.text = "ESRI REST Service"
+## formatVer_text = str.rstrip(str.lstrip(envirDesc.text))
+## formatVer = target_root.xpath("./distInfo/distFormat/formatVer")[0]
+## formatVer.text = str.rstrip(str.lstrip(formatVer_text))
+## del formatVer_text
+## del envirDesc
+## del formatVer
+## del formatName
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## mdFileID = target_root.xpath(f"//mdFileID")
+## if mdFileID is not None and len(mdFileID) == 0:
+## _xml = 'gov.noaa.nmfs.inport:'
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## target_root.insert(root_dict['mdFileID'], _root)
+## del _root, _xml
+## elif mdFileID is not None and len(mdFileID) and len(mdFileID[0]) == 0:
+## mdFileID[0].text = "gov.noaa.nmfs.inport:"
+## elif mdFileID is not None and len(mdFileID) and len(mdFileID[0]) == 1:
+## pass
+## #print(etree.tostring(mdFileID[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del mdFileID
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## mdMaint = target_root.xpath(f"//mdMaint")
+## if mdMaint is not None and len(mdMaint) == 0:
+## _xml = ''
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## target_root.insert(root_dict['mdMaint'], _root)
+## del _root, _xml
+## elif mdMaint is not None and len(mdMaint) and len(mdMaint[0]) == 0:
+## target_root.xpath("./mdMaint/maintFreq/MaintFreqCd")[0].attrib["value"] = "009"
+## elif mdMaint is not None and len(mdMaint) and len(mdMaint[0]) == 1:
+## pass #print(etree.tostring(mdMaint[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## else:
+## pass
+## #print(etree.tostring(mdMaint[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del mdMaint
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## distorTran = target_root.xpath("//distorTran")
+## for _distorTran in distorTran:
+## _distorTran.tag = "distTranOps"
+## del _distorTran
+## del distorTran
+
+## distTranOps = target_root.xpath("//distTranOps")
+## for i in range(0, len(distTranOps)):
+## if i == 0:
+## xml_file = b'''
+## MB
+## 0
+##
+## https://services2.arcgis.com/C8EMgrsFcRFL6LrL/arcgis/rest/services/.../FeatureServer
+## ESRI REST Service
+## NMFS Office of Science and Technology
+## Dataset Feature Service
+##
+##
+##
+##
+## '''
+## _tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## _root = _tree.getroot()
+## distTranOps[i].getparent().replace(distTranOps[i], _root)
+## del _root, _tree, xml_file
+## elif i > 0:
+## distTranOps[i].getparent().remove(distTranOps[i])
+## else:
+## pass
+## del i
+## del distTranOps
+ #print(etree.tostring(target_root.xpath("./distInfo")[0], encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./Esri/DataProperties/itemProps")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## new_item_name = target_root.find("./Esri/DataProperties/itemProps/itemName").text
+## new_item_name = new_item_name.replace("IDW_Sample_Locations", "Sample_Locations") if "Sample_Locations" in new_item_name else new_item_name
+## onLineSrcs = target_root.findall("./distInfo/distTranOps/onLineSrc")
+## for onLineSrc in onLineSrcs:
+## if onLineSrc.find('./protocol').text == "ESRI REST Service":
+## old_linkage_element = onLineSrc.find('./linkage')
+## old_linkage = old_linkage_element.text
+## #print(old_linkage, flush=True)
+## old_item_name = old_linkage[old_linkage.find("/services/")+len("/services/"):old_linkage.find("/FeatureServer")]
+## new_linkage = old_linkage.replace(old_item_name, f"{new_item_name}_{date_code(project)}")
+## #print(new_linkage, flush=True)
+## old_linkage_element.text = new_linkage
+## #print(old_linkage_element.text, flush=True)
+## del old_linkage_element
+## del old_item_name, old_linkage, new_linkage
+## else:
+## pass
+## del onLineSrc
+## del onLineSrcs, new_item_name
+## #print(etree.tostring(target_root.xpath("./distInfo")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
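The linkage-rewrite loop above extracts the old item name between `"/services/"` and `"/FeatureServer"` and substitutes the new one. The string surgery in isolation (the function name and example URL are illustrative, not from the script):

```python
def rewrite_linkage(old_linkage, new_item_name):
    """Swap the service item name between "/services/" and "/FeatureServer"
    in a REST endpoint URL, mirroring the onLineSrc loop above."""
    start = old_linkage.find("/services/") + len("/services/")
    end = old_linkage.find("/FeatureServer")
    old_item_name = old_linkage[start:end]
    return old_linkage.replace(old_item_name, new_item_name)

url = "https://example.com/arcgis/rest/services/Old_Name/FeatureServer"
```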
+
+## xml_file = b'''
+##
+## external
+## 579ce2e21b888ac8f6ac1dac30f04cddec7a0d7c
+## NMFS Office of Science and Technology
+## NMFS Office of Science and Technology
+## GIS App Developer
+##
+##
+## 1315 East West Highway
+## Silver Spring
+## MD
+## 20910-3282
+## US
+## tim.haverland@noaa.gov
+##
+##
+## 301-427-8137
+## 301-713-4137
+##
+## 0700 - 1800 EST/EDT
+##
+## https://www.fisheries.noaa.gov/about/office-science-and-technology
+## REST Service
+## NMFS Office of Science and Technology
+## NOAA Fisheries Office of Science and Technology
+##
+##
+##
+##
+##
+## NMFS Office of Science and Technology (Distributor)
+## True
+##
+##
+##
+##
+##
+## '''
+## _tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## _root = _tree.getroot()
+## distributor = target_root.xpath(f"./distInfo/distributor")[0]
+## distributor.getparent().replace(distributor, _root)
+## del _root, _tree, xml_file
+## #print(f"\n\t{etree.tostring(target_root.xpath(f'./distInfo/distributor')[0], encoding='UTF-8', method='xml', pretty_print=True).decode()}\n")
+## del distributor
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # statement
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## statement = target_root.xpath("./dqInfo/dataLineage/statement")
+## if statement is not None and len(statement) == 0:
+## pass # Need to insert statement
+## elif statement is not None and len(statement) and len(statement[0]) == 0:
+## target_root.xpath("./dqInfo/dataLineage/statement")[0].text = "Need to update data lineage statement"
+## elif statement is not None and len(statement) and len(statement[0]) == 1:
+## pass
+## elif statement is not None and len(statement) and len(statement[0]) >= 1:
+## pass
+## else:
+## pass
+## #print(f"\n\t{etree.tostring(statement[0], encoding='UTF-8', method='xml', pretty_print=True).decode()}\n")
+## del statement
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # srcDesc
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## srcDesc = target_root.xpath("./dqInfo/dataLineage/dataSource/srcDesc")
+## if srcDesc is not None and len(srcDesc) == 0:
+## pass # Need to insert srcDesc
+## elif srcDesc is not None and len(srcDesc) and len(srcDesc[0]) == 0:
+## target_root.xpath("./dqInfo/dataLineage/dataSource/srcDesc")[0].text = "Need to update srcDesc"
+## elif srcDesc is not None and len(srcDesc) and len(srcDesc[0]) == 1:
+## pass
+## elif srcDesc is not None and len(srcDesc) and len(srcDesc[0]) >= 1:
+## pass
+## else:
+## pass
+## #print(f"\n\t{etree.tostring(srcDesc[0], encoding='UTF-8', method='xml', pretty_print=True).decode()}\n")
+## del srcDesc
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # prcStep
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## stepProcs = target_root.xpath("./dqInfo/dataLineage/prcStep/stepProc")
+## for stepProc in stepProcs:
+## rpIndName = stepProc.find("rpIndName")
+## if rpIndName is None:
+## stepProc.getparent().remove(stepProc)
+## else:
+## pass
+## del rpIndName
+## #print(f"{etree.tostring(stepProc, encoding='UTF-8', method='xml', pretty_print=True).decode()}")
+## del stepProc
+## del stepProcs
+
+## _report = target_root.xpath(f"./dqInfo/report[@type='DQConcConsis']")
+## if len(_report) == 1:
+## _xml = '''
+## Based on a review from DisMAP Team all necessary features are present.
+##
+##
+##
+## Conceptual Consistency Report
+##
+## NMFS OST DisMAP
+##
+##
+##
+##
+##
+##
+## Based on a review from DisMAP Team all necessary features are present.
+## 1
+##
+##
+## '''
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## #print(f"{etree.tostring(_root, encoding='UTF-8', method='xml', pretty_print=True).decode()}")
+## #raise SystemExit
+## _report[0].getparent().replace(_report[0], _root)
+## del _root, _xml
+## else:
+## pass
+## del _report
+##
+## _report = target_root.xpath(f"./dqInfo/report[@type='DQCompOm']")
+## if len(_report) == 1:
+## _xml = '''
+## Based on a review from DisMAP Team all necessary features are present.
+##
+##
+##
+## Completeness Report
+##
+## NMFS OST DisMAP
+##
+##
+##
+##
+##
+##
+## Based on a review from DisMAP Team all necessary features are present.
+## 1
+##
+##
+## '''
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## _report[0].getparent().replace(_report[0], _root)
+## del _root, _xml
+## else:
+## pass
+## del _report
+
+
+## prcStep = target_root.xpath("./dqInfo/dataLineage/prcStep")
+## #print(len(prcStep))
+## if prcStep is not None and len(prcStep) == 0:
+## pass
+## #print("prcStep missing")
+## elif prcStep is not None and len(prcStep) and len(prcStep[0]) == 0:
+## #print("found empty element prcStep. Now adding content.")
+## target_root.xpath("./dqInfo/dataLineage/prcStep")[0].text = "Update Metadata 2025"
+## elif prcStep is not None and len(prcStep) and len(prcStep[0]) >= 1:
+## for i in range(0, len(prcStep)):
+## stepDesc = prcStep[i].xpath("./stepDesc")[0]
+## if stepDesc.text == "pre-Update Metadata 2025":
+## prcStep[i].xpath("./stepDateTm")[0].text = CreaDateTime
+## elif stepDesc.text == "Update Metadata 2025":
+## prcStep[i].xpath("./stepDateTm")[0].text = ModDateTime
+## elif stepDesc.text not in ["pre-Update Metadata 2025", "Update Metadata 2025"]:
+## prcStep[i].xpath("./stepDateTm")[0].text = CreaDateTime
+## del stepDesc
+## del i
+## else:
+## pass
+## del prcStep
+
+ #srcDesc = target_root.xpath("./dqInfo/dataLineage/dataSource/srcDesc")
+ #for _srcDesc in srcDesc:
+ # #print(f"\t{etree.tostring(_srcDesc, encoding='UTF-8', method='xml', pretty_print=True).decode()}")
+ # del _srcDesc
+ #del srcDesc
+ # print(etree.tostring(reports[0].getparent(), encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #del dataSources
+ #dataLineage = target_root.xpath("./dqInfo/dataLineage")
+ #for i in range(0, len(dataLineage)):
+ # #print(etree.tostring(dataLineage[i], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ # del i
+ #del dataLineage
+ #distInfo = target_root.xpath("./distInfo")[0]
+ #print(etree.tostring(distInfo, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ #del distInfo
+
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # Reorder Elements
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # Metadata
+## target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag])
+## # Esri
+## Esri = target_root.xpath("./Esri")[0]
+## Esri[:] = sorted(Esri, key=lambda x: esri_dict[x.tag])
+## #print(etree.tostring(Esri, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del Esri
+## # dataIdInfo
+## dataIdInfo = target_root.xpath("./dataIdInfo")[0]
+## dataIdInfo[:] = sorted(dataIdInfo, key=lambda x: dataIdInfo_dict[x.tag])
+## #print(etree.tostring(dataIdInfo, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del dataIdInfo
+## # dqInfo
+## dqInfo = target_root.xpath("./dqInfo")[0]
+## dqInfo[:] = sorted(dqInfo, key=lambda x: dqInfo_dict[x.tag])
+## #print(etree.tostring(dqInfo, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del dqInfo
+## # distInfo
+## distInfo = target_root.xpath("./distInfo")[0]
+## distInfo[:] = sorted(distInfo, key=lambda x: distInfo_dict[x.tag])
+## #print(etree.tostring(distInfo, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## del distInfo
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ etree.indent(target_tree, space=' ')
+ dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+
+ SaveBackXml = False
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ #_target_tree = etree.parse(StringIO(dataset_md.xml), parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ #_target_tree.write(rf"{export_folder}\{dataset_name}.xml", pretty_print=True)
+ #print(etree.tostring(_target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ #del _target_tree
+ del dataset_md
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ # Declared Variables
+ del project
+ del contacts_xml_tree
+ del source_target_merge
+ del export_folder
+ del mdContact_rpIndName, mdContact_eMailAdd, mdContact_role
+ del citRespParty_rpIndName, citRespParty_eMailAdd, citRespParty_role
+ del idPoC_rpIndName, idPoC_eMailAdd, idPoC_role
+ del distorCont_rpIndName, distorCont_eMailAdd, distorCont_role
+ del srcCitatn_rpIndName, srcCitatn_eMailAdd, srcCitatn_role
+ del dataset_name
+ del CreaDateTime, ModDateTime
+ del contact_dict
+ #del RoleCd_dict, tpCat_dict,
+ del dataIdInfo_dict, dqInfo_dict, distInfo_dict, esri_dict, root_dict
+ # Imports
+ del etree, md, BytesIO, StringIO, copy
+ # Declared variables
+ del target_root, target_tree
+ # Function Parameters
+ del dataset_path
+
+ except KeyboardInterrupt:
+ raise SystemExit
+ except SystemExit:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+def add_update_dates(dataset_path=""):
+ try:
+ # Imports
+ from lxml import etree
+ from io import StringIO, BytesIO
+ import copy
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+ print(f"Processing Add/Update Dates for dataset: '{dataset_name}'")
+ #print(f"\tDataset Location: {os.path.basename(os.path.dirname(dataset_path))}")
+
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+ CreaDate = target_root.xpath(f"//Esri/CreaDate")[0].text
+ CreaTime = target_root.xpath(f"//Esri/CreaTime")[0].text
+ #print(CreaDate, CreaTime)
+ CreaDateTime = f"{CreaDate[:4]}-{CreaDate[4:6]}-{CreaDate[6:]}T{CreaTime[:2]}:{CreaTime[2:4]}:{CreaTime[4:6]}"
+ #print(f"\tCreaDateTime: {CreaDateTime}")
+ #del CreaDateTime
+ del CreaDate, CreaTime
+ ModDate = target_root.xpath(f"//Esri/ModDate")[0].text
+ ModTime = target_root.xpath(f"//Esri/ModTime")[0].text
+ #print(ModDate, ModTime)
+ ModDateTime = f"{ModDate[:4]}-{ModDate[4:6]}-{ModDate[6:]}T{ModTime[:2]}:{ModTime[2:4]}:{ModTime[4:6]}"
+ #print(f"\tModDateTime: {ModDateTime}")
+ #del ModDateTime
+ del ModDate, ModTime
+
+ dates = target_tree.xpath(f"//date")
+ count=0
+ count_dates = len(dates)
+ for date in dates:
+ #_date = copy.deepcopy(date)
+ count+=1
+
+ createDate = date.xpath(f"./createDate")
+ #print(f"Element list: '{createDate}'")
+ #print(f"Element count: '{len(createDate)}'")
+ #print(len(createDate[0].text))
+ #print(type(createDate[0].text))
+ if not len(createDate):
+ _xml = f"<createDate>{CreaDateTime}</createDate>"
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ date.insert(0, _root)
+ del _root, _xml
+ elif len(createDate) and createDate[0].text is not None:
+ pass
+ #print(f"createDate exists and has content '{createDate[0].text}'")
+ #print(etree.tostring(createDate[0], encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ elif len(createDate) and createDate[0].text is None:
+ #print(f"createDate exists and but does not have content.")
+ createDate[0].text = CreaDateTime
+ date.insert(0, createDate[0])
+ del createDate
+
+ pubDate = date.xpath(f"./pubDate")
+ if not len(pubDate):
+ _xml = f"<pubDate>{CreaDateTime}</pubDate>"
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ date.insert(1, _root)
+ del _root, _xml
+ elif len(pubDate) and pubDate[0].text is not None:
+ pass
+ #print(f"pubDate exists and has content '{pubDate[0].text}'")
+ #print(etree.tostring(pubDate[0], encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ elif len(pubDate) and pubDate[0].text is None:
+ #print(f"pubDate exists and but does not have content.")
+ pubDate[0].text = CreaDateTime
+ date.insert(1, pubDate[0])
+ del pubDate
+
+ reviseDate = date.xpath(f"./reviseDate")
+ if not len(reviseDate):
+ _xml = f"<reviseDate>{ModDateTime}</reviseDate>"
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ date.insert(2, _root)
+ del _root, _xml
+ elif len(reviseDate) and reviseDate[0].text is not None:
+ pass
+ #print(f"reviseDate exists and has content '{reviseDate[0].text}'")
+ #print(etree.tostring(reviseDate[0], encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ elif len(reviseDate) and reviseDate[0].text is None:
+ #print(f"reviseDate exists and but does not have content.")
+ reviseDate[0].text = ModDateTime
+ date.insert(2, reviseDate[0])
+ del reviseDate
+
+## if len(createDate) == 0:
+## _xml = f"{CreaDateTime}"
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## date.insert(0, _root)
+## del _root, _xml
+## elif len(createDate) == 1:
+## if createDate[0].text:
+## createDate[0].text = createDate[0].text
+## elif not createDate[0].text:
+## createDate[0].text = CreaDateTime
+## else:
+## pass
+## else:
+## pass
+ #print(etree.tostring(createDate[0], encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+## pubDate = date.xpath(f"./date/pubDate")
+## if len(pubDate) == 0:
+## _xml = f"{CreaDateTime}"
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## date.insert(0, _root)
+## del _root, _xml
+## elif len(pubDate) == 1:
+## if pubDate[0].text:
+## pubDate[0].text = pubDate[0].text
+## elif not pubDate[0].text:
+## pubDate[0].text = CreaDateTime
+## else:
+## pass
+## else:
+## pass
+## del pubDate
+##
+## try:
+## revisedDate = date.xpath(f"./date/revisedDate")[0]
+## revisedDate.tag = "reviseDate"
+## del revisedDate
+## except:
+## pass
+##
+## reviseDate = date.xpath(f"./date/reviseDate")
+## if len(reviseDate) == 0:
+## _xml = f"{CreaDateTime}"
+## _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+## date.insert(0, _root)
+## del _root, _xml
+## elif len(reviseDate) == 1:
+## if reviseDate[0].text:
+## reviseDate[0].text = reviseDate[0].text
+## elif not reviseDate[0].text:
+## reviseDate[0].text = ModDateTime
+## else:
+## pass
+## else:
+## pass
+## del reviseDate
+
+## date.getparent().replace(date, _date)
+
+ #print(etree.tostring(date, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+
+ del date
+ del count, count_dates
+ del dates
+
+ dates = target_root.xpath(f"//date")
+ count=0
+ count_dates = len(dates)
+ for date in dates:
+ count+=1
+ #print(f"\tDate: {count} of {count_dates}")
+ #print(f"\t\tCreaDateTime: {CreaDateTime}")
+ #print(f"\t\tModDateTime: {ModDateTime}")
+ #print(date.getroottree().getpath(date))
+ #print(etree.tostring(date, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ del date
+ del count, count_dates
+ del dates
+
+ # No changes needed below
+ #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ etree.indent(target_root, space=' ')
+ dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+
+ SaveBackXml = True
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ #dataset_md.reload()
+ del dataset_md
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ del dataset_name, target_tree, target_root
+
+ # Declared Variables
+ del CreaDateTime, ModDateTime
+ # Imports
+ del etree, StringIO, BytesIO, copy, md
+ # Function Parameters
+ del dataset_path
+
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+def basic_metadata_report(dataset_path=""):
+ try:
+ # Imports
+ from lxml import etree
+ from io import StringIO, BytesIO
+ import copy
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ project_gdb = os.path.dirname(dataset_path)
+ project_folder = os.path.dirname(project_gdb)
+ project = os.path.basename(os.path.dirname(project_gdb))
+ export_folder = rf"{project_folder}\Export"
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+ del scratch_folder
+ del project_folder
+ del project_gdb
+
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+ print(f"Reporting on Basic XML Metadata for dataset: '{dataset_name}'")
+ #print(f"\tDataset Location: {os.path.basename(os.path.dirname(dataset_path))}")
+
+ dataset_md = md.Metadata(dataset_path)
+ #dataset_md.synchronize("ALWAYS")
+ #dataset_md.save()
+ #dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+ dataset_md = md.Metadata(dataset_path)
+ print(f"\tTitle: {dataset_md.title}")
+ print(f"\tSearch Keys: {dataset_md.tags}")
+ print(f"\tSummary: {dataset_md.summary}")
+ print(f"\tDescription: {dataset_md.description}")
+ print(f"\tCredits: {dataset_md.credits}")
+ print(f"\tUse Limits: {dataset_md.accessConstraints}")
+ del dataset_md
+
+ # No changes needed below
+ #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ etree.indent(target_root, space=' ')
+ dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+
+ SaveBackXml = True
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ #dataset_md.reload()
+ del dataset_md
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ # Declared Variables
+ del project, export_folder
+ del dataset_name
+ del target_tree, target_root
+ # Imports
+ del etree, StringIO, BytesIO, copy, md
+ # Function Parameters
+ del dataset_path
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+def metadata_esri_report(dataset_path=""):
+ try:
+ # Imports
+ from lxml import etree
+ from io import StringIO, BytesIO
+ import copy
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+ print(f"Reporting on Esri XML for dataset: '{dataset_name}'")
+ #print(f"\tDataset Location: {os.path.basename(os.path.dirname(dataset_path))}")
+
+ dataset_md = md.Metadata(dataset_path)
+ #dataset_md.synchronize("ALWAYS")
+ #dataset_md.save()
+ #dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+ if target_root.find("Esri") is not None:
+ Esri = target_root.xpath("./Esri")[0]
+ _xml = '''<Esri>
+ <ArcGISstyle>ISO 19139 Metadata Implementation Specification GML3.2</ArcGISstyle>
+ <ArcGISProfile>ISO19139</ArcGISProfile>
+ </Esri>
+ '''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ # Merge Target with Source
+ target_source_merge = xml_tree_merge(Esri, _root)
+ #print(etree.tostring(target_source_merge, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ # Merge Source with Target
+ source_target_merge = xml_tree_merge(target_source_merge, Esri)
+ #print(etree.tostring(source_target_merge, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ Esri.getparent().replace(Esri, source_target_merge)
+ del target_source_merge, source_target_merge
+ del _root, _xml
+ #print(etree.tostring(target_root.find("Esri"), encoding='UTF-8', method='xml', pretty_print=True).decode())
+ del Esri
+ else:
+ pass
+
+ # No changes needed below
+ #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ etree.indent(target_root, space=' ')
+ dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+
+ SaveBackXml = True
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ #dataset_md.reload()
+ del dataset_md
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ # Declared Variables
+ del dataset_name
+ del target_tree, target_root
+ # Imports
+ del etree, StringIO, BytesIO, copy, md
+ # Function Parameters
+ del dataset_path
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
+def metadata_dataidinfo_report(dataset_path=""):
+ try:
+ # Imports
+ from lxml import etree
+ from io import StringIO, BytesIO
+ import copy
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ import json
+ project_gdb = os.path.dirname(dataset_path)
+ project_folder = os.path.dirname(project_gdb)
+ del project_gdb
+ json_path = rf"{project_folder}\dataIdInfo_dict.json"
+ with open(json_path, "r") as json_file:
+ dataIdInfo_dict = json.load(json_file)
+ del json_file
+ del json_path
+ json_path = rf"{project_folder}\root_dict.json"
+ with open(json_path, "r") as json_file:
+ root_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del project_folder
+ del json
+
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+ print(f"Reporting on dataIdInfo XML for dataset: '{dataset_name}'")
+ #print(f"\tDataset Location: {os.path.basename(os.path.dirname(dataset_path))}")
+
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+ #target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag])
+ #for child in target_root:
+ # #print(child.tag)
+ # #print(etree.tostring(child, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ # del child
+ # Esri
+ # dataIdInfo
+ # dqInfo
+ # distInfo
+ # mdContact
+ # mdLang
+ # mdChar
+ # mdDateSt
+ # mdHrLv
+ # mdHrLvName
+ # mdFileID
+ # mdMaint
+ # refSysInfo
+ # spatRepInfo
+ # spdoinfo
+ # eainfo
+
+ refSysInfo = target_root.xpath("./refSysInfo")
+ if len(refSysInfo) == 0:
+ pass #print("missing")
+ elif len(refSysInfo) == 1:
+ pass #print(etree.tostring(target_root.xpath("./refSysInfo")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ elif len(refSysInfo) > 1:
+ pass #print("too many")
+ else:
+ pass
+ del refSysInfo
+
+ dqInfo = target_root.xpath("./dqInfo")
+ if len(dqInfo) == 0:
+ _xml = '''<dqInfo xmlns="">
+ <dqScope xmlns="">
+ <scpLvl>
+ <ScopeCd value="005"/>
+ </scpLvl>
+ <scpLvlDesc xmlns="">
+ <datasetSet>dataset</datasetSet>
+ </scpLvlDesc>
+ </dqScope>
+ </dqInfo>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.insert(root_dict["dqInfo"], _root)
+ del _root, _xml
+ else:
+ pass
+ del dqInfo
+
+ mdHrLv = target_root.xpath("./mdHrLv")
+ if len(mdHrLv) == 0:
+ _xml = '''<mdHrLv>
+ <ScopeCd value="005"/>
+ </mdHrLv>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.insert(root_dict["mdHrLv"], _root)
+ del _root, _xml
+ else:
+ pass
+ del mdHrLv
+
+ mdHrLvName = target_root.xpath("./mdHrLvName")
+ if len(mdHrLvName) == 0:
+ _xml = '''<mdHrLvName>dataset</mdHrLvName>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.insert(root_dict["mdHrLvName"], _root)
+ del _root, _xml
+ else:
+ pass
+ del mdHrLvName
+
+ fgdcGeoform = target_root.xpath("./dataIdInfo/idCitation/presForm/fgdcGeoform")
+ if len(fgdcGeoform) == 0:
+ _xml = '''<fgdcGeoform>document</fgdcGeoform>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.xpath("./dataIdInfo/idCitation/presForm")[0].insert(dataIdInfo_dict["fgdcGeoform"], _root)
+ del _root, _xml
+ else:
+ pass
+ del fgdcGeoform
+
+ #print(etree.tostring(target_root.xpath("./distInfo")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+
+ distInfo = target_root.xpath("./distInfo")
+ if len(distInfo) == 0:
+ _xml = '''<distInfo xmlns="">
+ <distFormat>
+ <formatName>ESRI REST Service</formatName>
+ <formatVer></formatVer>
+ <fileDecmTech>Uncompressed</fileDecmTech>
+ </distFormat>
+ </distInfo>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.insert(root_dict["distInfo"], _root)
+ del _root, _xml
+ elif len(distInfo) == 1:
+ formatVer = distInfo[0].xpath("./distFormat/formatVer")
+ if len(formatVer) == 0:
+ _xml = '''<formatVer></formatVer>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.xpath("./distInfo/distFormat")[0].insert(1, _root)
+ del _root, _xml
+ elif len(formatVer) == 1:
+ pass
+ elif len(formatVer) > 1:
+ for i in range(1, len(formatVer)):
+ formatVer[i].getparent().remove(formatVer[i])
+ del i
+ else:
+ pass
+ del formatVer
+
+ fileDecmTech = distInfo[0].xpath("./distFormat/fileDecmTech")
+ if len(fileDecmTech) == 0:
+ _xml = '''<fileDecmTech></fileDecmTech>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.xpath("./distInfo/distFormat")[0].insert(2, _root)
+ del _root, _xml
+ elif len(fileDecmTech) == 1:
+ pass
+ elif len(fileDecmTech) > 1:
+ for i in range(1, len(fileDecmTech)):
+ fileDecmTech[i].getparent().remove(fileDecmTech[i])
+ del i
+ else:
+ pass
+ del fileDecmTech
+
+ formatInfo = distInfo[0].xpath("./distFormat/formatInfo")
+ if len(formatInfo) == 0:
+ _xml = '''<formatInfo></formatInfo>'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.xpath("./distInfo/distFormat")[0].insert(2, _root)
+ del _root, _xml
+ elif len(formatInfo) == 1:
+ pass
+ elif len(formatInfo) > 1:
+ for i in range(1, len(formatInfo)):
+ formatInfo[i].getparent().remove(formatInfo[i])
+ del i
+ else:
+ pass
+ del formatInfo
+ else:
+ pass
+ del distInfo
+
+ #print(etree.tostring(target_root.xpath("./mdHrLv")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./mdHrLvName")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./dqInfo/dqScope")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./dataIdInfo/idCitation/presForm")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./distInfo")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+
+ #raise Exception
+ mdHrLvName = target_root.xpath("./mdHrLvName")
+ if len(mdHrLvName) == 0:
+ _xml = '''dataset'''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.insert(root_dict["mdHrLvName"], _root)
+ del _root, _xml
+ else:
+ pass
+ del mdHrLvName
+
+ target_root.xpath("./dataIdInfo/envirDesc")[0].set('Sync', "TRUE")
+ #target_root.xpath("./dqInfo/dqScope/scpLvl/ScopeCd")[0].set('value', "005")
+ #target_root.xpath("./dqInfo/dqScope/scpLvl/ScopeCd")[0].set('Sync', "TRUE")
+ mdHrLvName = target_root.xpath("./mdHrLvName")[0]
+ ScopeCd = target_root.xpath("./dqInfo/dqScope/scpLvl/ScopeCd")[0]
+ PresFormCd = target_root.xpath("./dataIdInfo/idCitation/presForm/PresFormCd")[0]
+ fgdcGeoform = target_root.xpath("./dataIdInfo/idCitation/presForm/fgdcGeoform")[0]
+ SpatRepTypCd = target_root.xpath("./dataIdInfo/spatRpType/SpatRepTypCd")[0]
+ PresFormCd.set('Sync', "TRUE")
+ #print("------" * 10)
+ #print(etree.tostring(SpatRepTypCd, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./dataIdInfo/idCitation/presForm")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./dqInfo/dqScope")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print("------" * 10)
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # SpatRepTypCd "Empty" "001" (vector) "002" (raster/grid) "003" (tabular)
+ # ScopeCd "005" "005" "007"
+ # PresFormCd "005" "003" "011"
+ # fgdcGeoform "vector data" "raster data" "tabular data"
+ # mdHrLvName "vector data" "raster data" "tabular data"
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ datasetSet = target_root.xpath("./dqInfo/dqScope/scpLvlDesc/datasetSet")
+ if len(datasetSet) == 0:
+ _xml = '<scpLvlDesc><datasetSet></datasetSet></scpLvlDesc>'
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root.xpath("./dqInfo/dqScope")[0].insert(1, _root)
+ del _root, _xml
+ else:
+ pass
+ datasetSet = target_root.xpath("./dqInfo/dqScope/scpLvlDesc/datasetSet")[0]
+ if SpatRepTypCd.get("value") == "001":
+ ScopeCd.set('value', "005")
+ PresFormCd.set("value", "005")
+ fgdcGeoform.text = "vector digital data"
+ datasetSet.text = "Vector Digital Data"
+ mdHrLvName.text = "Vector Digital Data"
+ elif SpatRepTypCd.get("value") == "002":
+ ScopeCd.set('value', "005")
+ PresFormCd.set("value", "003")
+ fgdcGeoform.text = "raster digital data"
+ datasetSet.text = "Raster Digital Data"
+ mdHrLvName.text = "Raster Digital Data"
+ elif SpatRepTypCd.get("value") == "003":
+ ScopeCd.set('value', "007")
+ PresFormCd.set("value", "011")
+ fgdcGeoform.text = "tabular digital data"
+ datasetSet.text = "Tabular Digital Data"
+ mdHrLvName.text = "Tabular Digital Data"
+ else:
+ pass
+ #print("------" * 10)
+ #print(etree.tostring(SpatRepTypCd, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./dataIdInfo/idCitation/presForm")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(etree.tostring(target_root.xpath("./dqInfo/dqScope")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print("------" * 10)
+ del datasetSet, SpatRepTypCd, fgdcGeoform, PresFormCd, ScopeCd, mdHrLvName
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ formatName = target_root.xpath("./distInfo/distFormat/formatName")[0]
+ envirDesc = target_root.xpath("./dataIdInfo/envirDesc")[0]
+ envirDesc.set('Sync', "TRUE")
+ target_root.xpath("./distInfo/distFormat/fileDecmTech")[0].text = "Uncompressed"
+ formatName.text = "ESRI REST Service"
+ formatVer = target_root.xpath("./distInfo/distFormat/formatVer")[0]
+ formatVer.text = envirDesc.text.strip()
+ del envirDesc
+ del formatVer
+ del formatName
+
+ xml_file = b'''<idPoC xmlns="">
+ <editorSource>external</editorSource>
+ <editorDigest>c66ffbb333c48d18d81856ec0e0c37ea752bff1a</editorDigest>
+ <rpIndName>Melissa Ann Karp</rpIndName>
+ <rpOrgName>NMFS Office of Science and Technology</rpOrgName>
+ <rpPosName>Fisheries Science Coordinator</rpPosName>
+ <rpCntInfo xmlns="">
+ <cntAddress addressType="both">
+ <delPoint>1315 East West Hwy</delPoint>
+ <city>Silver Spring</city>
+ <adminArea>MD</adminArea>
+ <postCode>20910-3282</postCode>
+ <eMailAdd>melissa.karp@noaa.gov</eMailAdd>
+ <country>US</country>
+ </cntAddress>
+ <cntPhone>
+ <voiceNum tddtty="">301-427-8202</voiceNum>
+ <faxNum>301-713-4137</faxNum>
+ </cntPhone>
+ <cntHours>0700 - 1800 EST/EDT</cntHours>
+ <cntOnlineRes>
+ <linkage>https://www.fisheries.noaa.gov/about/office-science-and-technology</linkage>
+ <protocol>REST Service</protocol>
+ <orName>NMFS Office of Science and Technology</orName>
+ <orDesc>NOAA Fisheries Office of Science and Technology</orDesc>
+ </cntOnlineRes>
+ </rpCntInfo>
+ <displayName>Melissa Ann Karp</displayName>
+ <editorSave>True</editorSave>
+ <role>
+ <RoleCd value="007"/>
+ </role>
+ </idPoC>'''
+ _tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ _root = _tree.getroot()
+ idPoC = target_root.xpath(f"./dataIdInfo/idPoC")
+ if len(idPoC) == 0:
+ target_root.xpath(f"./dataIdInfo")[0].insert(dataIdInfo_dict["idPoC"], _root)
+ elif len(idPoC) == 1:
+ idPoC[0].getparent().replace(idPoC[0], _root)
+ else:
+ pass
+ del _root, _tree, xml_file
+ #print(f"\n\t{etree.tostring(target_root.xpath(f'./dataIdInfo/idPoC')[0], encoding='UTF-8', method='xml', pretty_print=True).decode()}\n")
+ del idPoC
+
+ xml_file = b'''<citRespParty xmlns="">
+ <editorSource>external</editorSource>
+ <editorDigest>579ce2e21b888ac8f6ac1dac30f04cddec7a0d7c</editorDigest>
+ <rpIndName>NMFS Office of Science and Technology</rpIndName>
+ <rpOrgName>NMFS Office of Science and Technology</rpOrgName>
+ <rpPosName>GIS App Developer</rpPosName>
+ <rpCntInfo xmlns="">
+ <cntAddress addressType="both">
+ <delPoint>1315 East West Highway</delPoint>
+ <city>Silver Spring</city>
+ <adminArea>MD</adminArea>
+ <postCode>20910-3282</postCode>
+ <country>US</country>
+ <eMailAdd>tim.haverland@noaa.gov</eMailAdd>
+ </cntAddress>
+ <cntPhone>
+ <voiceNum tddtty="">301-427-8137</voiceNum>
+ <faxNum>301-713-4137</faxNum>
+ </cntPhone>
+ <cntHours>0700 - 1800 EST/EDT</cntHours>
+ <cntOnlineRes>
+ <linkage>https://www.fisheries.noaa.gov/about/office-science-and-technology</linkage>
+ <protocol>REST Service</protocol>
+ <orName>NMFS Office of Science and Technology</orName>
+ <orDesc>NOAA Fisheries Office of Science and Technology</orDesc>
+ </cntOnlineRes>
+ </rpCntInfo>
+ <displayName>NMFS Office of Science and Technology (Distributor)</displayName>
+ <editorSave>True</editorSave>
+ <role>
+ <RoleCd value="005"/>
+ </role>
+ </citRespParty>'''
+ _tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ _root = _tree.getroot()
+ citRespParty = target_root.xpath(f"./dataIdInfo/idCitation/citRespParty")
+ if len(citRespParty) == 0:
+ target_root.xpath(f"./dataIdInfo/idCitation")[0].insert(dataIdInfo_dict["citRespParty"], _root)
+ elif len(citRespParty) == 1:
+ citRespParty[0].getparent().replace(citRespParty[0], _root)
+ else:
+ pass
+ del _root, _tree, xml_file
+ #print(f"\n\t{etree.tostring(target_root.xpath(f'./dataIdInfo/idCitation/citRespParty')[0], encoding='UTF-8', method='xml', pretty_print=True).decode()}\n")
+ del citRespParty
+
+ resConst = target_root.xpath("./dataIdInfo/resConst")
+ if len(resConst) == 1:
+ xml_file = b'''<resConst xmlns="">
+ <LegConsts xmlns="">
+ <accessConsts>
+ <RestrictCd value="008"/>
+ </accessConsts>
+ <useConsts>
+ <RestrictCd value="008"/>
+ </useConsts>
+ <othConsts>Data License: CC0-1.0
+Data License URL: https://creativecommons.org/publicdomain/zero/1.0/
+Data License Statement: These data were produced by NOAA and are not subject to copyright protection in the United States. NOAA waives any potential copyright and related rights in these data worldwide through the Creative Commons Zero 1.0 Universal Public Domain Dedication (CC0-1.0).</othConsts>
+ </LegConsts>
+ <SecConsts xmlns="">
+ <handDesc>FISMA Low</handDesc>
+ </SecConsts>
+ <Consts xmlns="">
+ <useLimit>&lt;DIV STYLE="text-align:Left;"&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;P&gt;&lt;SPAN&gt;***No Warranty*** The user assumes the entire risk related to its use of these data. NMFS is providing these data 'as is' and NMFS disclaims any and all warranties, whether express or implied, including (without limitation) any implied warranties of merchantability or fitness for a particular purpose. No warranty expressed or implied is made regarding the accuracy or utility of the data on any other system or for general or scientific purposes, nor shall the act of distribution constitute any such warranty. It is strongly recommended that careful attention be paid to the contents of the metadata file associated with these data to evaluate dataset limitations, restrictions or intended use. In no event will NMFS be liable to you or to any third party for any direct, indirect, incidental, consequential, special or exemplary damages or lost profit resulting from any use or misuse of these data.&lt;/SPAN&gt;&lt;/P&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;</useLimit>
+ </Consts>
+ </resConst>'''
+ _tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ _root = _tree.getroot()
+ resConst[0].getparent().replace(resConst[0], _root)
+ del _root, _tree, xml_file
+ else:
+ pass
+ del resConst
+
+ dataIdInfo = target_root.xpath("./dataIdInfo")
+ for data_Id_Info in dataIdInfo:
+ data_Id_Info[:] = sorted(data_Id_Info, key=lambda x: dataIdInfo_dict[x.tag])
+ #print(etree.tostring(data_Id_Info, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ del data_Id_Info
+ del dataIdInfo
+
+ # No changes needed below
+ #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ etree.indent(target_root, space=' ')
+ dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+
+ SaveBackXml = True
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ #dataset_md.reload()
+ del dataset_md
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ # Declared Variables
+ del dataset_name
+ del dataIdInfo_dict, root_dict
+ del target_tree, target_root
+ # Imports
+ del etree, StringIO, BytesIO, copy, md
+ # Function Parameters
+ del dataset_path
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ raise SystemExit
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk:
+ print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
+ return True
+ finally:
+ pass
+
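The replace-or-insert XPath pattern used above recurs throughout these report functions: look up an element, insert a replacement at a position taken from an order dictionary when it is absent, or swap it in place when it exists. A standalone sketch (with hypothetical tag names, not the real ArcGIS metadata schema) looks like:

```python
from lxml import etree

# Toy document standing in for the parsed metadata tree
doc = etree.fromstring("<root><keep/><old>stale</old></root>")
replacement = etree.fromstring("<old>fresh</old>")

matches = doc.xpath("./old")
if len(matches) == 0:
    # position would normally come from an order dict like dataIdInfo_dict
    doc.insert(1, replacement)
elif len(matches) == 1:
    # swap the existing element for the replacement in place
    matches[0].getparent().replace(matches[0], replacement)

print(etree.tostring(doc).decode())  # → <root><keep/><old>fresh</old></root>
```

Using `getparent().replace(...)` (an lxml extension over the stdlib `ElementTree` API) keeps the element at its original position among its siblings.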
+def metadata_dq_info_report(dataset_path=""):
+ try:
+ # Imports
+ from lxml import etree
+ from io import StringIO, BytesIO
+ import copy
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ import json
+ json_path = rf"{project_folder}\dqInfo_dict.json"
+ with open(json_path, "r") as json_file:
+ dqInfo_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del json
+
+ export_folder = rf"{os.path.dirname(os.path.dirname(dataset_path))}\Export"
+
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+ print(f"Reporting on dqInfo XML for dataset: '{dataset_name}'")
+ #print(f"\tDataset Location: {os.path.basename(os.path.dirname(dataset_path))}")
+
+ dataset_md = md.Metadata(dataset_path)
+ #dataset_md.synchronize("ALWAYS")
+ #dataset_md.save()
+ #dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+ # dqInfo
+ dqInfo = target_root.xpath("./dqInfo")[0]
+ dqInfo[:] = sorted(dqInfo, key=lambda x: dqInfo_dict[x.tag])
+ #print(etree.tostring(dqInfo, encoding='UTF-8', method='xml', pretty_print=True).decode())
+
+ #target_root[:] = sorted(target_root, key=lambda x: dqInfo_dict[x.tag])
+ #for child in target_root:
+ # #print(child.tag)
+ # #print(etree.tostring(child, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ # del child
+ # Esri
+ # dataIdInfo
+ # dqInfo
+ # distInfo
+ # mdContact
+ # mdLang
+ # mdChar
+ # mdDateSt
+ # mdHrLv
+ # mdHrLvName
+ # mdFileID
+ # mdMaint
+ # refSysInfo
+ # spatRepInfo
+ # spdoinfo
+ # eainfo
+
+ _report = target_root.xpath(f"./dqInfo/report[@type='DQConcConsis']")
+ if len(_report) == 1:
+ _xml = '''
+ Based on a review from DisMAP Team all necessary features are present.
+
+
+
+ Conceptual Consistency Report
+
+ NMFS OST DisMAP
+
+
+
+
+
+
+ Based on a review from DisMAP Team all necessary features are present.
+ 1
+
+
+ '''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ #print(f"{etree.tostring(_root, encoding='UTF-8', method='xml', pretty_print=True).decode()}")
+ #raise SystemExit
+ _report[0].getparent().replace(_report[0], _root)
+ del _root, _xml
+ else:
+ pass
+ del _report
+
+ _report = target_root.xpath(f"./dqInfo/report[@type='DQCompOm']")
+ if len(_report) == 1:
+ _xml = '''
+ Based on a review from DisMAP Team all necessary features are present.
+
+
+
+ Completeness Report
+
+ NMFS OST DisMAP
+
+
+
+
+
+
+ Based on a review from DisMAP Team all necessary features are present.
+ 1
+
+
+ '''
+ _root = etree.XML(_xml, etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ _report[0].getparent().replace(_report[0], _root)
+ del _root, _xml
+ else:
+ pass
+ del _report
+
+ dqInfo = target_root.xpath("./dqInfo")
+ #print(len(dqInfo))
+ for dq_Info in dqInfo:
+ dq_Info[:] = sorted(dq_Info, key=lambda x: dqInfo_dict[x.tag])
+ #print(etree.tostring(dq_Info, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ del dq_Info
+ del dqInfo
+
+ #print(len(target_root.xpath("./dqInfo")))
+ # Keep the first dqInfo element; export and remove any extras.
+ # (Indexing into a re-evaluated XPath result while removing nodes
+ # skips elements, so iterate over a fixed list instead.)
+ for dq_Info in target_root.xpath("./dqInfo")[1:]:
+ # Writing to a new file
+ with open(rf"{export_folder}\{os.path.basename(dataset_path)} dqInfo.xml", "w") as file:
+ file.write(etree.tostring(dq_Info, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ del file
+ dq_Info.getparent().remove(dq_Info)
+ del dq_Info
+
+ # No changes needed below
+ #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ etree.indent(target_root, space=' ')
+ dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+
+ SaveBackXml = True
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ #dataset_md.reload()
+ del dataset_md
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ # Declared Variables
+ del export_folder
+ del dataset_name
+ del dqInfo_dict
+ del target_tree, target_root
+ # Imports
+ del etree, StringIO, BytesIO, copy, md
+ # Function Parameters
+ del dataset_path
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
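The `dataIdInfo_dict`/`dqInfo_dict` JSON files loaded above map each child tag to a canonical position, and the in-place slice assignment re-sorts the children to match. A minimal sketch of that idiom, with an illustrative ordering rather than the project's full schema:

```python
from lxml import etree

# Hypothetical canonical order for three dqInfo children
order = {"dqScope": 0, "report": 1, "dataLineage": 2}

dq = etree.fromstring("<dqInfo><report/><dataLineage/><dqScope/></dqInfo>")

# lxml elements support slice assignment, so this reorders in place
dq[:] = sorted(dq, key=lambda child: order[child.tag])

print([child.tag for child in dq])  # → ['dqScope', 'report', 'dataLineage']
```

Because the key function raises `KeyError` on an unknown tag, the order dictionary must cover every child tag that can appear, which is why these scripts keep the dictionaries in external JSON files.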
+def metadata_dist_info_report(dataset_path=""):
+ try:
+ # Imports
+ from lxml import etree
+ from io import StringIO, BytesIO
+ import copy
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ import json
+ json_path = rf"{project_folder}\distInfo_dict.json"
+ with open(json_path, "r") as json_file:
+ distInfo_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del json
+
+ project = os.path.basename(os.path.dirname(os.path.dirname(dataset_path)))
+ export_folder = rf"{os.path.dirname(os.path.dirname(dataset_path))}\Export"
+
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+ print(f"Reporting on distInfo XML for dataset: '{dataset_name}'")
+ #print(f"\tDataset Location: {os.path.basename(os.path.dirname(dataset_path))}")
+
+ dataset_md = md.Metadata(dataset_path)
+ #dataset_md.synchronize("ALWAYS")
+ #dataset_md.save()
+ #dataset_md.reload()
+ dataset_md_xml = dataset_md.xml
+ del dataset_md
+
+ # Parse the XML
+ parser = etree.XMLParser(encoding='UTF-8', remove_blank_text=True)
+ target_tree = etree.parse(StringIO(dataset_md_xml), parser=parser)
+ target_root = target_tree.getroot()
+ del parser, dataset_md_xml
+
+ #target_root[:] = sorted(target_root, key=lambda x: distInfo_dict[x.tag])
+ #for child in target_root:
+ # #print(child.tag)
+ # #print(etree.tostring(child, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ # del child
+ # Esri
+ # dataIdInfo
+ # dqInfo
+ # distInfo
+ # mdContact
+ # mdLang
+ # mdChar
+ # mdDateSt
+ # mdHrLv
+ # mdHrLvName
+ # mdFileID
+ # mdMaint
+ # refSysInfo
+ # spatRepInfo
+ # spdoinfo
+ # eainfo
+
+ distorTran = target_root.xpath("//distorTran")
+ for _distorTran in distorTran:
+ _distorTran.tag = "distTranOps"
+ del _distorTran
+ del distorTran
+
+## target_root.xpath("./distInfo/distFormat/formatName")[0].set('Sync', "FALSE")
+## target_root.xpath("./dataIdInfo/envirDesc")[0].set('Sync', "TRUE")
+## #target_root.xpath("./dqInfo/dqScope/scpLvl/ScopeCd")[0].set('value', "005")
+## PresFormCd = target_root.xpath("./dataIdInfo/idCitation/presForm/PresFormCd")[0]
+## fgdcGeoform = target_root.xpath("./dataIdInfo/idCitation/presForm/fgdcGeoform")[0]
+## SpatRepTypCd = target_root.xpath("./dataIdInfo/spatRpType/SpatRepTypCd")[0]
+## PresFormCd.set('Sync', "TRUE")
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # SpatRepTypCd "Empty" "001" (vector) "002" (raster/grid) "003" (tabular)
+## # PresFormCd "005" "003" "011"
+## # fgdcGeoform "vector data" "raster data" "tabular data"
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## datasetSet = target_root.xpath("./dqInfo/dqScope/scpLvlDesc/datasetSet")[0]
+## if SpatRepTypCd.get("value") == "001":
+## PresFormCd.set("value", "005")
+## fgdcGeoform.text = "vector digital data"
+## datasetSet.text = "Vector Digital Data"
+## elif SpatRepTypCd.get("value") == "002":
+## PresFormCd.set("value", "003")
+## fgdcGeoform.text = "raster digital data"
+## datasetSet.text = "Raster Digital Data"
+## elif SpatRepTypCd.get("value") == "003":
+## PresFormCd.set("value", "011")
+## fgdcGeoform.text = "tabular digital data"
+## datasetSet.text = "Tabular Digital Data"
+## else:
+## pass
+## #print("------" * 10)
+## #print(etree.tostring(SpatRepTypCd, encoding='UTF-8', method='xml', pretty_print=True).decode())
+## #print(etree.tostring(target_root.xpath("./dataIdInfo/idCitation/presForm")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## #print(etree.tostring(target_root.xpath("./dqInfo/dqScope")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+## #print("------" * 10)
+## del datasetSet, SpatRepTypCd, fgdcGeoform, PresFormCd
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+## formatName = target_root.xpath("./distInfo/distFormat/formatName")[0]
+## envirDesc = target_root.xpath("./dataIdInfo/envirDesc")[0]
+## envirDesc.set('Sync', "TRUE")
+## target_root.xpath("./distInfo/distFormat/fileDecmTech")[0].text = "Uncompressed"
+## formatName.text = "ESRI REST Service"
+## formatVer_text = str.rstrip(str.lstrip(envirDesc.text))
+## formatVer = target_root.xpath("./distInfo/distFormat/formatVer")[0]
+## formatVer.text = str.rstrip(str.lstrip(formatVer_text))
+## del formatVer_text
+## del envirDesc
+## del formatVer
+## del formatName
+
+ xml_file = b'''
+
+ external
+ 579ce2e21b888ac8f6ac1dac30f04cddec7a0d7c
+ NMFS Office of Science and Technology
+ NMFS Office of Science and Technology
+ GIS App Developer
+
+
+ 1315 East West Highway
+ Silver Spring
+ MD
+ 20910-3282
+ US
+ tim.haverland@noaa.gov
+
+
+ 301-427-8137
+ 301-713-4137
+
+ 0700 - 1800 EST/EDT
+
+ https://www.fisheries.noaa.gov/about/office-science-and-technology
+ REST Service
+ NMFS Office of Science and Technology
+ NOAA Fisheries Office of Science and Technology
+
+
+
+
+
+ NMFS Office of Science and Technology (Distributor)
+ True
+
+
+
+
+
+ '''
+ _tree = etree.parse(BytesIO(xml_file), etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ _root = _tree.getroot()
+ distributor = target_root.xpath(f"./distInfo/distributor")
+ if len(distributor) == 0:
+ target_root.xpath(f"./distInfo")[0].insert(distInfo_dict["distributor"], _root)
+ elif len(distributor) == 1:
+ distributor[0].getparent().replace(distributor[0], _root)
+ else:
+ pass
+ del _root, _tree, xml_file
+ #print(f"\n\t{etree.tostring(target_root.xpath(f'./distInfo/distributor')[0], encoding='UTF-8', method='xml', pretty_print=True).decode()}\n")
+ del distributor
+
+## new_item_name = target_root.find("./Esri/DataProperties/itemProps/itemName").text
+## new_item_name = new_item_name.replace("IDW_Sample_Locations", "Sample_Locations") if "Sample_Locations" in new_item_name else new_item_name
+## onLineSrcs = target_root.findall("./distInfo/distTranOps/onLineSrc")
+## for onLineSrc in onLineSrcs:
+## if onLineSrc.find('./protocol').text == "ESRI REST Service":
+## old_linkage_element = onLineSrc.find('./linkage')
+## old_linkage = old_linkage_element.text
+## #print(old_linkage, flush=True)
+## old_item_name = old_linkage[old_linkage.find("/services/")+len("/services/"):old_linkage.find("/FeatureServer")]
+## new_linkage = old_linkage.replace(old_item_name, f"{new_item_name}_{date_code(project)}")
+## #print(new_linkage, flush=True)
+## old_linkage_element.text = new_linkage
+## #print(old_linkage_element.text, flush=True)
+## del old_linkage_element
+## del old_item_name, old_linkage, new_linkage
+## else:
+## pass
+## del onLineSrc
+## del onLineSrcs, new_item_name
+## #print(etree.tostring(target_root.xpath("./distInfo")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+
+ new_item_name = target_root.find("./Esri/DataProperties/itemProps/itemName").text
+ if "Sample_Locations" in new_item_name:
+ _new_item_name = new_item_name.replace("IDW_Sample_Locations", "Sample_Locations")
+ onLineSrcs = target_root.findall("./distInfo/distTranOps/onLineSrc")
+ for onLineSrc in onLineSrcs:
+ onLineSrc.find('./protocol').text = "ESRI REST Service"
+ old_linkage_element = onLineSrc.find('./linkage')
+ old_linkage = old_linkage_element.text
+ #print(old_linkage, flush=True)
+ old_item_name = old_linkage[old_linkage.find("/services/")+len("/services/"):old_linkage.find("/FeatureServer")]
+ if old_item_name != f"{_new_item_name}_{date_code(project)}":
+ #print('remove')
+ onLineSrc.getparent().remove(onLineSrc)
+ else:
+ pass
+ #print(old_item_name, f"{_new_item_name}_{date_code(project)}")
+ #new_linkage = old_linkage.replace(old_item_name, f"{_new_item_name}_{date_code(project)}")
+ #print(new_linkage, flush=True)
+ #old_linkage_element.text = new_linkage
+ #print(old_linkage_element.text, flush=True)
+ del old_linkage_element
+ del old_item_name, old_linkage #, new_linkage
+ #print(etree.tostring(onLineSrc, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #if onLineSrc.find('./protocol').text == "ESRI REST Service":
+ # old_linkage_element = onLineSrc.find('./linkage')
+ # old_linkage = old_linkage_element.text
+ # #print(old_linkage, flush=True)
+ # old_item_name = old_linkage[old_linkage.find("/services/")+len("/services/"):old_linkage.find("/FeatureServer")]
+ # new_linkage = old_linkage.replace(old_item_name, f"{new_item_name}_{date_code(project)}")
+ # #print(new_linkage, flush=True)
+ # old_linkage_element.text = new_linkage
+ # #print(old_linkage_element.text, flush=True)
+ # del old_linkage_element
+ # del old_item_name, old_linkage, new_linkage
+ #else:
+ # pass
+ del onLineSrc
+ #print(_new_item_name)
+ del onLineSrcs, _new_item_name
+ else:
+ pass
+ #print(etree.tostring(target_root.xpath("./distInfo")[0], encoding='UTF-8', method='xml', pretty_print=True).decode())
+ #print(new_item_name)
+ del new_item_name
+
+ distInfo = target_root.xpath("./distInfo")
+ #print(len(distInfo))
+ for dist_Info in distInfo:
+ dist_Info[:] = sorted(dist_Info, key=lambda x: distInfo_dict[x.tag])
+ #print(etree.tostring(dist_Info, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ del dist_Info
+ del distInfo
+
+ # No changes needed below
+ #print(etree.tostring(target_tree, encoding='UTF-8', method='xml', pretty_print=True).decode())
+ etree.indent(target_root, space=' ')
+ dataset_md_xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+
+ SaveBackXml = True
+ if SaveBackXml:
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.xml = dataset_md_xml
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ #dataset_md.reload()
+ del dataset_md
+ else:
+ pass
+ del SaveBackXml
+ del dataset_md_xml
+
+ # Declared Variables
+ del export_folder, project
+ del dataset_name
+ del distInfo_dict
+ del target_tree, target_root
+ # Imports
+ del etree, StringIO, BytesIO, copy, md
+ # Function Parameters
+ del dataset_path
+ except KeyboardInterrupt:
+ raise SystemExit
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
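The string slicing used above to pull the service item name out of a REST endpoint URL can be recreated on its own; the URL below is a made-up example, not a real service:

```python
# Extract the item name between "/services/" and "/FeatureServer"
old_linkage = ("https://services.example.com/arcgis/rest/services/"
               "AI_IDW_Sample_Locations_20241201/FeatureServer/0")

start = old_linkage.find("/services/") + len("/services/")
end = old_linkage.find("/FeatureServer")
old_item_name = old_linkage[start:end]

print(old_item_name)  # → AI_IDW_Sample_Locations_20241201
```

The comparison against `f"{_new_item_name}_{date_code(project)}"` in the function then decides whether the `onLineSrc` entry refers to the current service or a stale one that should be removed.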
+def main(project_gdb=""):
+ try:
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ print(f"{'-' * 80}")
+ print(f"Python Script: {os.path.basename(__file__)}")
+ print(rf"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ print(f"Python Version: {sys.version}")
+ print(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ print(f"{'-' * 80}\n")
+
+ #Imports
+ from lxml import etree
+ from io import StringIO, BytesIO
+ #import copy
+ #import arcpy
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # Test if passed workspace exists, if not raise SystemExit
+ if not arcpy.Exists(project_gdb):
+ print(f"{os.path.basename(project_gdb)} is missing!!")
+ print(f"{project_gdb}")
+ raise SystemExit
+
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+ #print(project_folder)
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ #metadata_dictionary = dataset_title_dict(project_gdb)
+ #for key in metadata_dictionary:
+ # print(key, metadata_dictionary[key])
+ # del key
+
+ datasets = list()
+ walk = arcpy.da.Walk(arcpy.env.workspace)
+ for dirpath, dirnames, filenames in walk:
+ for filename in filenames:
+ datasets.append(os.path.join(dirpath, filename))
+ del filename
+ del dirpath, dirnames, filenames
+ del walk
+
+ del scratch_folder, project_folder
+
+ print(f"Processing: {os.path.basename(arcpy.env.workspace)} in the '{inspect.stack()[0][3]}' function")
+
+ # Points
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("AI_Sample_Locations") or ds.endswith("AI_IDW_Sample_Locations")]):
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("_Sample_Locations")]):
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("EBS_Sample_Locations") or ds.endswith("EBS_IDW_Sample_Locations")]):
+ # Polylines
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("AI_Boundary") or ds.endswith("AI_IDW_Boundary")]):
+ # Polygons
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("AI_Region") or ds.endswith("AI_IDW_Region")]):
+ # Table
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("AI_Indicators") or ds.endswith("AI_IDW_Indicators")]):
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("Indicators")]):
+ # Raster
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("AI_Bathymetry") or ds.endswith("AI_IDW_Bathymetry")]):
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("AI_Raster_Mask") or ds.endswith("AI_IDW_Raster_Mask")]):
+ #for dataset_path in sorted([ds for ds in datasets if ds.endswith("AI_Raster_Mosaic") or ds.endswith("AI_IDW_Mosaic")]):
+ #for dataset_path in sorted([ds for ds in datasets if any (ds.endswith(d) for d in ["Datasets", "AI_IDW_Extent_Points", "AI_IDW_Latitude", "AI_IDW_Raster_Mask"])]):
+ #for dataset_path in sorted([ds for ds in datasets if "AI_IDW" in os.path.basename(ds)]):
+ #for dataset_path in sorted([ds for ds in datasets if "EBS_IDW" in os.path.basename(ds)]):
+ #for dataset_path in sorted([ds for ds in datasets if any (ds.endswith(d) for d in ["Species_Filer", "EBS_IDW_Extent_Points", "EBS_IDW_Latitude", "EBS_IDW_Raster_Mask"])]):
+ #for dataset_path in sorted([ds for ds in datasets if any (ds.endswith(d) for d in ['SpeciesPersistenceIndicatorNetWTCPUE', 'SpeciesPersistenceIndicatorPercentileWTCPUE', 'Species_Filter', 'DisMAP_Survey_Info'])]):
+ for dataset_path in sorted([ds for ds in datasets if any(ds.endswith(d) for d in ['Species_Filter'])]):
+
+ # ALL
+ #for dataset_path in sorted(datasets):
+ #dataset_name = os.path.basename(dataset_path)
+ #print(f"Dataset: '{dataset_name}'\n\tType: '{arcpy.Describe(dataset_path).datasetType}'")
+ #del dataset_name
+
+ ImportBasicTemplateXml = True
+ if ImportBasicTemplateXml:
+ import_basic_template_xml(dataset_path)
+ else:
+ pass
+ del ImportBasicTemplateXml
+
+ BasicMetadataReport = False # Just a report
+ if BasicMetadataReport:
+ basic_metadata_report(dataset_path)
+ else:
+ pass
+ del BasicMetadataReport
+
+ MetadataEsriReport = False
+ if MetadataEsriReport:
+ metadata_esri_report(dataset_path)
+ else:
+ pass
+ del MetadataEsriReport
+
+ MetadataDataIdInfoReport = False
+ if MetadataDataIdInfoReport:
+ metadata_dataidinfo_report(dataset_path)
+ else:
+ pass
+ del MetadataDataIdInfoReport
+
+ MetadataDqInfoReport = False
+ if MetadataDqInfoReport:
+ metadata_dq_info_report(dataset_path)
+ else:
+ pass
+ del MetadataDqInfoReport
+
+ MetadataDistInfoReport = False
+ if MetadataDistInfoReport:
+ metadata_dist_info_report(dataset_path)
+ else:
+ pass
+ del MetadataDistInfoReport
+
+ UpdateEaInfoXmlElements = False
+ if UpdateEaInfoXmlElements:
+ update_eainfo_xml_elements(dataset_path)
+ else:
+ pass
+ del UpdateEaInfoXmlElements
+
+ AddUpdateDates = False
+ if AddUpdateDates:
+ add_update_dates(dataset_path)
+ else:
+ pass
+ del AddUpdateDates
+
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # A keeper. Adds entity attribute details, if missing
+ InsertMissingElements = False
+ if InsertMissingElements:
+ insert_missing_elements(dataset_path)
+ else:
+ pass
+ del InsertMissingElements
+
+ AddUpdateContacts = False
+ if AddUpdateContacts:
+ add_update_contacts(dataset_path=dataset_path)
+ else:
+ pass
+ del AddUpdateContacts
+
+ CreateFeatureClassLayers = False
+ if CreateFeatureClassLayers:
+ create_feature_class_layers(dataset_path=dataset_path)
+ else:
+ pass
+ del CreateFeatureClassLayers
+
+ PrintTargetTree = False
+ if PrintTargetTree:
+ dataset_md = md.Metadata(dataset_path)
+ #dataset_md.synchronize("ALWAYS")
+ #dataset_md.save()
+ #dataset_md.reload()
+ # Parse the XML
+ export_folder = rf"{os.path.dirname(os.path.dirname(dataset_path))}\Export"
+ #print(export_folder)
+ #_target_tree = etree.parse(StringIO(dataset_md.xml), parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ #etree.indent(_target_tree, " ")
+ #_target_tree.write(rf"{export_folder}\{os.path.basename(dataset_path)}.xml", encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+ #print(etree.tostring(_target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True).decode())
+ #del _target_tree
+ dataset_md.saveAsXML(rf"{export_folder}\{os.path.basename(dataset_path)}.xml", "TEMPLATE")
+ del export_folder
+ del dataset_md
+ else:
+ pass
+ del PrintTargetTree
+ del dataset_path
+
+ CompactGDB = False
+ if CompactGDB:
+ print(f"Compact GDB")
+ arcpy.management.Compact(project_gdb)
+ print("\t"+arcpy.GetMessages().replace("\n", "\n\t")+"\n")
+ else:
+ pass
+ del CompactGDB
+
+ # Declared Variables
+ del datasets
+ # Imports
+ del etree, StringIO, BytesIO, md
+ # Function Parameters
+ del project_gdb
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ print(f"\n{'-' * 80}")
+ print(f"Python script: {os.path.basename(__file__)}\nCompleted: {strftime('%a %b %d %I:%M %p', localtime())}")
+ print(u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time))))
+ print(f"{'-' * 80}")
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+ except KeyboardInterrupt:
+ raise SystemExit
+ except Exception:
+ traceback.print_exc()
+ raise SystemExit
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+if __name__ == '__main__':
+ try:
+ # Append the location of this script to the System Path
+ sys.path.append(os.path.dirname(os.path.dirname(__file__)))
+ # Imports
+ base_project_folder = rf"{os.path.dirname(os.path.dirname(__file__))}"
+ #project_name = "April 1 2023"
+ #project_name = "July 1 2024"
+ project_name = "December 1 2024"
+ #project_name = "June 1 2025"
+ project_folder = rf"{base_project_folder}\{project_name}"
+ project_gdb = rf"{project_folder}\{project_name}.gdb"
+
+ main(project_gdb=project_gdb)
+
+ # Declared Variables
+ #del collective_title
+ del project_gdb, project_name, project_folder, base_project_folder
+ # Imports
+ except:
+ traceback.print_exc()
+ else:
+ pass
+ finally:
+ pass
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools_dev/dev_export_arcgis_metadata.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools_dev/dev_export_arcgis_metadata.py
new file mode 100644
index 0000000..0c17a0d
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools_dev/dev_export_arcgis_metadata.py
@@ -0,0 +1,180 @@
+"""
+This module synchronizes and exports ArcGIS metadata records from a project
+file geodatabase to XML files in a metadata workspace folder.
+
+Requires : Python 3.11
+ ArcGIS Pro 3.x
+
+Copyright 2025 NMFS
+Licensed under the Apache License, Version 2.0 (the 'License');
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+ http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an 'AS IS' BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+# Python Built-in's modules are loaded first
+import os, sys
+import traceback, inspect
+
+def export_metadata(project_gdb="", metadata_workspace=""):
+ try:
+ # Imports
+ # Third-party modules are loaded second
+ import arcpy
+ from arcpy import metadata as md
+ import dev_create_folders
+
+ # Project modules
+ from src.project_tools import pretty_format_xml_file
+
+ # Use all of the cores on the machine
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.env.overwriteOutput = True
+
+ # Define variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+ scratch_gdb = os.path.join(scratch_folder, "scratch.gdb")
+
+ # Set the workspace environment to local file geodatabase
+ arcpy.env.workspace = project_gdb
+ # Set the scratchWorkspace environment to local file geodatabase
+ arcpy.env.scratchWorkspace = scratch_gdb
+
+ # Clean-up variables
+ del scratch_folder, scratch_gdb
+
+ print(f"\n{'--Start' * 10}--\n")
+
+ if not os.path.isdir(rf"{project_folder}\{metadata_workspace}"):
+ dev_create_folders.create_folders(project_folder, [metadata_workspace])
+
+ fcs = arcpy.ListFeatureClasses()
+
+ print(f"Synchronize and export feature classes metadata from Project GDB\n")
+ for fc in sorted(fcs):
+ print(f"Exporting the metadata record for: '{fc}'")
+
+ fc_path = rf"{project_gdb}\{fc}"
+
+ export_xml_metadata_path = rf"{project_folder}\{metadata_workspace}\{fc}.xml"
+
+ dataset_md = md.Metadata(fc_path)
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.title = fc
+ dataset_md.save()
+ dataset_md.reload()
+ dataset_md.saveAsXML(export_xml_metadata_path, "REMOVE_ALL_SENSITIVE_INFO")
+ #if dataset_md.thumbnailUri:
+ # arcpy.management.Copy(dataset_md.thumbnailUri, rf"{metadata_workspace}\{fc} Thumbnail.jpg")
+ # arcpy.management.Copy(dataset_md.thumbnailUri, rf"{metadata_workspace}\{fc} Browse Graphic.jpg")
+
+ del dataset_md
+
+ if os.path.isfile(export_xml_metadata_path):
+ pretty_format_xml_file(export_xml_metadata_path)
+ else:
+ print(f"Problem with '{os.path.basename(export_xml_metadata_path)}'")
+
+ del export_xml_metadata_path
+ del fc, fc_path
+
+ del fcs
+ del project_folder
+
+ print(f"\n{'--End' * 10}--")
+
+ # Imports
+ del md, pretty_format_xml_file, dev_create_folders
+ # Function parameters
+ del project_gdb, metadata_workspace
+
+ except:
+ traceback.print_exc()
+ else:
+ # Cleanup
+ arcpy.management.ClearWorkspaceCache()
+ # Imports
+ del arcpy
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
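`pretty_format_xml_file` is a project helper imported from `src.project_tools` and not shown in this diff; a minimal stand-in (an assumption about its behavior, not the project's implementation) that re-indents an exported XML file in place with lxml might look like:

```python
from lxml import etree

def pretty_format_xml_file_sketch(xml_path):
    # Drop existing whitespace-only text so indentation is rebuilt cleanly
    parser = etree.XMLParser(remove_blank_text=True)
    tree = etree.parse(xml_path, parser)
    # Re-indent with four spaces, matching the style used elsewhere here
    etree.indent(tree.getroot(), space='    ')
    tree.write(xml_path, encoding='UTF-8', xml_declaration=True,
               pretty_print=True)
```

`etree.indent` requires lxml 4.5 or newer; older environments would need a manual recursive indenter instead.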
+def main(project_gdb="", metadata_workspace=""):
+ try:
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ print(f"{'-' * 80}")
+ print(f"Python Script: {os.path.basename(__file__)}")
+ print(f"Location: {os.path.dirname(__file__)}")
+ print(f"Python Version: {sys.version} Environment: {os.path.basename(sys.exec_prefix)}")
+ print(f"{'-' * 80}\n")
+
+ export_metadata(project_gdb=project_gdb, metadata_workspace=metadata_workspace)
+
+ # Declared Variables
+
+ # Function parameters
+ del project_gdb, metadata_workspace
+
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+ print(f"\n{'-' * 80}")
+ print(f"Python script: {os.path.basename(__file__)} successfully completed {strftime('%a %b %d %I:%M %p', localtime())}")
+ print(u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time))))
+ print(f"{'-' * 80}")
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave here. For test, move to finally
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+if __name__ == '__main__':
+ try:
+ # Imports
+ from datetime import date
+
+ # Append the location of this script to the System Path
+ #sys.path.append(os.path.dirname(__file__))
+ sys.path.append(os.path.dirname(os.path.dirname(__file__)))
+
+ today = date.today()
+ date_string = today.strftime("%Y-%m-%d")
+
+ project_folder = rf"{os.path.dirname(os.path.dirname(__file__))}"
+ project_name = "National Mapper"
+ #project_name = "NMFS_ESA_Range"
+ project_gdb = rf"{project_folder}\{project_name}.gdb"
+ metadata_workspace = "Export"
+ #metadata_workspace = f"Export {date_string}"
+ #metadata_workspace = f"Export 2025-01-27"
+
+ main(project_gdb=project_gdb, metadata_workspace=metadata_workspace)
+
+ # Declared Variables
+ del project_folder, project_name, project_gdb, metadata_workspace
+ del today, date_string
+ # Imports
+ del date
+
+ except:
+ traceback.print_exc()
+ else:
+ pass
+ finally:
+ pass
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools_dev/dev_zip_and_unzip_csv_data.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools_dev/dev_zip_and_unzip_csv_data.py
new file mode 100644
index 0000000..b94c62b
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools_dev/dev_zip_and_unzip_csv_data.py
@@ -0,0 +1,207 @@
+# -*- coding: utf-8 -*-
+#-------------------------------------------------------------------------------
+# Name: zip_and_unzip_csv_data
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 09/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+#-------------------------------------------------------------------------------
+import os, sys # built-ins first
+import traceback
+import importlib
+import inspect
+
+import arcpy # third-parties second
+
+def zip_data(source_folder, selected_files, out_zip_file):
+ try:
+ # Imports
+ import copy
+ from zipfile import ZipFile
+
+ os.chdir(source_folder)
+
+ print(f"Zipping up files into: '{os.path.basename(out_zip_file)}'")
+
+ selected_files = selected_files.split(";")
+
+ source_files = [f for f in os.listdir(source_folder) if f in selected_files]
+ with ZipFile(out_zip_file, mode="w") as archive:
+ for source_file in source_files:
+ archive.write(source_file)
+ del source_file
+ del archive
+ del source_files
+
+ print(f"Done zipping up files into '{os.path.basename(out_zip_file)}'")
+
+ __results = copy.deepcopy(out_zip_file)
+ del out_zip_file
+
+ # Imports
+ del ZipFile, copy
+ # Function parameters
+ del source_folder, selected_files
+
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave this check here; for testing, move it to the finally block
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return __results
+ finally:
+ if "__results" in locals().keys(): del __results
+
+def un_zip_data(source_zip_file, out_data_path):
+ try:
+ # Imports
+ import copy
+ from zipfile import ZipFile
+
+ # Change Directory
+ os.chdir(out_data_path)
+
+ print(f"Un-Zipping files from {os.path.basename(source_zip_file)}")
+
+ with ZipFile(source_zip_file, mode="r") as archive:
+ for file in archive.namelist():
+ #if file.endswith(".csv"):
+ # archive.extract(file, ".")
+ archive.extract(file, ".")
+ del file
+ del archive
+
+ print(f"Done Un-Zipping files from {os.path.basename(source_zip_file)}")
+
+ __results = copy.deepcopy(out_data_path)
+ del out_data_path
+ # Declared variable
+
+ # Imports
+ del ZipFile, copy
+ # Function Parameters
+ del source_zip_file
+
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave this check here; for testing, move it to the finally block
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return __results
+ finally:
+ if "__results" in locals().keys(): del __results
+
+def main(in_data_path, out_data_path, selected_files):
+ try:
+ from time import gmtime, localtime, strftime, time
+ # Set a start time so that we can see how long things take
+ start_time = time()
+ print(f"{'-' * 80}")
+ print(f"Python Script: {os.path.basename(__file__)}")
+ print(rf"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ print(f"Python Version: {sys.version}")
+ print(f"Environment: {os.path.basename(sys.exec_prefix)}")
+ print(f"{'-' * 80}\n")
+
+ # Imports
+ from dev_dismap_tools import date_code
+
+ in_project = os.path.basename(os.path.dirname(in_data_path))
+ in_version = date_code(in_project)
+ in_zip_file = rf"{in_data_path}\CSV Data {in_version}.zip"
+
+ out_project = os.path.basename(os.path.dirname(out_data_path))
+ out_version = date_code(out_project)
+ out_zip_file = rf"{out_data_path}\CSV Data {out_version}.zip"
+
+ print(in_project)
+ print(os.path.basename(in_zip_file))
+ print(os.path.basename(out_zip_file))
+
+ outZipFile = zip_data(in_data_path, selected_files, out_zip_file)
+ un_zip_data(outZipFile, out_data_path)
+
+ del in_project, in_version, in_zip_file
+ del out_project, out_version, out_zip_file, outZipFile
+ # Imports
+ del date_code
+ # Function parameters
+ del in_data_path, out_data_path, selected_files
+
+ # Elapsed time
+ end_time = time()
+ elapse_time = end_time - start_time
+
+ print(f"\n{'-' * 80}")
+ print(f"Python script: {os.path.basename(__file__)}\nCompleted: {strftime('%a %b %d %I:%M %p', localtime())}")
+ print(f"Elapsed Time {strftime('%H:%M:%S', gmtime(elapse_time))} (H:M:S)")
+ print(f"{'-' * 80}")
+ del elapse_time, end_time, start_time
+ del gmtime, localtime, strftime, time
+
+ except Exception:
+ traceback.print_exc()
+ except:
+ traceback.print_exc()
+ else:
+ # While in development, leave this check here; for testing, move it to the finally block
+ rk = [key for key in locals().keys() if not key.startswith('__')]
+ if rk: print(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ return True
+ finally:
+ pass
+
+if __name__ == "__main__":
+ try:
+ # Append the location of this script to the system path
+ #sys.path.append(os.path.dirname(__file__))
+ sys.path.append(os.path.dirname(os.path.dirname(__file__)))
+
+ # Imports
+ import dev_zip_and_unzip_csv_data
+ importlib.reload(dev_zip_and_unzip_csv_data)
+
+ base_project_folder = os.path.dirname(os.path.dirname(__file__))
+
+ #in_project = "July 1 2024"
+ in_project = "December 1 2024"
+ in_data_path = rf"{base_project_folder}\{in_project}\CSV Data"
+ del in_project
+
+ #out_project = "December 1 2024"
+ out_project = "June 1 2025"
+ out_data_path = rf"{base_project_folder}\{out_project}\CSV Data"
+ del out_project
+
+ selected_files = ["AI_IDW.csv", "Datasets.csv", "DisMAP_Survey_Info.csv",
+ "EBS_IDW.csv", "ENBS_IDW.csv", "field_definitions.json",
+ "GMEX_IDW.csv", "GOA_IDW.csv", "HI_IDW.csv",
+ "metadata_dictionary.json", "NBS_IDW.csv",
+ "NEUS_FAL_IDW.csv", "NEUS_SPR_IDW.csv",
+ "SEUS_FAL_IDW.csv", "SEUS_SPR_IDW.csv",
+ "SEUS_SUM_IDW.csv", "Species_Filter.csv",
+ "table_definitions.json", "WC_ANN_IDW.csv",
+ "WC_GFDL.csv", "WC_GLMME.csv", "WC_TRI_IDW.csv",
+ ]
+
+ selected_files = ";".join(selected_files)
+
+ main(in_data_path, out_data_path, selected_files)
+
+ del in_data_path, out_data_path, selected_files
+ del base_project_folder
+
+ # Imports
+ del dev_zip_and_unzip_csv_data
+
+ except:
+ traceback.print_exc()
+ else:
+ pass
+ finally:
+ pass
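`zip_data` and `un_zip_data` above both rely on `os.chdir` to control archive-relative paths, which mutates process-wide state. `ZipFile.write`'s `arcname` parameter achieves the same layout without changing directory. A minimal sketch under that assumption; the function names here are illustrative, not the repo's:

```python
import os
from zipfile import ZipFile

def zip_selected(source_folder, selected_files, out_zip_file):
    """Archive the selected files from source_folder, storing bare filenames."""
    with ZipFile(out_zip_file, mode="w") as archive:
        for name in selected_files:
            path = os.path.join(source_folder, name)
            if os.path.isfile(path):
                # arcname keeps the entry relative, so no os.chdir is needed
                archive.write(path, arcname=name)
    return out_zip_file

def unzip_all(source_zip_file, out_data_path):
    """Extract every entry into out_data_path."""
    with ZipFile(source_zip_file, mode="r") as archive:
        archive.extractall(out_data_path)
    return out_data_path
```

Skipping missing names silently mirrors the original's list-comprehension filter; raising on a missing file instead may be preferable for catching typos in the selected-files list.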
diff --git a/ArcGIS-Analysis-Python/Scripts/dismap_tools_dev/dismap_metadata_processing.py b/ArcGIS-Analysis-Python/Scripts/dismap_tools_dev/dismap_metadata_processing.py
new file mode 100644
index 0000000..7d51345
--- /dev/null
+++ b/ArcGIS-Analysis-Python/Scripts/dismap_tools_dev/dismap_metadata_processing.py
@@ -0,0 +1,2559 @@
+# -*- coding: utf-8 -*-
+# -------------------------------------------------------------------------------
+# Name: dismap_metadata_processing
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 03/03/2024
+# Copyright: (c) john.f.kennedy 2024
+# Licence:
+# -------------------------------------------------------------------------------
+import os, sys # built-ins first
+import traceback
+import importlib
+import inspect
+
+import arcpy # third-parties second
+
+sys.path.append(os.path.dirname(__file__))
+
+def line_info(msg):
+ f = inspect.currentframe()
+ i = inspect.getframeinfo(f.f_back)
+ return f"Script: {os.path.basename(i.filename)}\n\tNear Line: {i.lineno}\n\tFunction: {i.function}\n\tMessage: {msg}"
+
+def create_basic_template_xml_files(base_project_file="", project=""):
+ try:
+ # Import
+ from arcpy import metadata as md
+
+ import dismap
+ importlib.reload(dismap)
+ from dismap import dataset_title_dict, pretty_format_xml_file
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2)
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ base_project_folder = os.path.dirname(base_project_file)
+ base_project_file = rf"{base_project_folder}\DisMAP.aprx"
+ project_folder = rf"{base_project_folder}\{project}"
+ project_gdb = rf"{project_folder}\{project}.gdb"
+ metadata_folder = rf"{project_folder}\Template Metadata"
+ crfs_folder = rf"{project_folder}\CRFs"
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ metadata_dictionary = dataset_title_dict(project_gdb)
+
+ workspaces = [project_gdb, crfs_folder]
+
+ for workspace in workspaces:
+
+ arcpy.env.workspace = workspace
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ datasets = list()
+
+ walk = arcpy.da.Walk(workspace)
+
+ for dirpath, dirnames, filenames in walk:
+ for filename in filenames:
+ datasets.append(os.path.join(dirpath, filename))
+ del filename
+ del dirpath, dirnames, filenames
+ del walk
+
+ for dataset_path in sorted(datasets):
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+
+ print(f"Dataset Name: {dataset_name}")
+
+ if "Datasets" == dataset_name:
+
+ print(f"\tDataset Table")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ datasets_table_template = rf"{metadata_folder}\datasets_table_template.xml"
+ dataset_md.saveAsXML(datasets_table_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(datasets_table_template)
+ del datasets_table_template
+
+ del dataset_md
+
+ elif "Species_Filter" == dataset_name:
+
+ print(f"\tSpecies Filter Table")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ species_filter_table_template = rf"{metadata_folder}\species_filter_table_template.xml"
+ dataset_md.saveAsXML(species_filter_table_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(species_filter_table_template)
+ del species_filter_table_template
+
+ del dataset_md
+
+ elif "Indicators" in dataset_name:
+
+ print(f"\tIndicators")
+
+ if dataset_name == "Indicators":
+ dataset_name = f"{dataset_name}_Table"
+ else:
+ pass
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ indicators_template = rf"{metadata_folder}\indicators_template.xml"
+ dataset_md.saveAsXML(indicators_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(indicators_template)
+ del indicators_template
+
+ del dataset_md
+
+ elif "LayerSpeciesYearImageName" in dataset_name:
+
+ print(f"\tLayer Species Year Image Name")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ layer_species_year_image_name_template = rf"{metadata_folder}\layer_species_year_image_name_template.xml"
+ dataset_md.saveAsXML(layer_species_year_image_name_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(layer_species_year_image_name_template)
+ del layer_species_year_image_name_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Boundary"):
+
+ print(f"\tBoundary")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ boundary_template = rf"{metadata_folder}\boundary_template.xml"
+ dataset_md.saveAsXML(boundary_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(boundary_template)
+ del boundary_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Extent_Points"):
+
+ print(f"\tExtent_Points")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ extent_points_template = rf"{metadata_folder}\extent_points_template.xml"
+ dataset_md.saveAsXML(extent_points_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(extent_points_template)
+ del extent_points_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Fishnet"):
+
+ print(f"\tFishnet")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ fishnet_template = rf"{metadata_folder}\fishnet_template.xml"
+ dataset_md.saveAsXML(fishnet_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(fishnet_template)
+ del fishnet_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Lat_Long"):
+
+ print(f"\tLat_Long")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ lat_long_template = rf"{metadata_folder}\lat_long_template.xml"
+ dataset_md.saveAsXML(lat_long_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(lat_long_template)
+ del lat_long_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Region"):
+
+ print(f"\tRegion")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ region_template = rf"{metadata_folder}\region_template.xml"
+ dataset_md.saveAsXML(region_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(region_template)
+ del region_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Sample_Locations"):
+
+ print(f"\tSample_Locations")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ sample_locations_template = rf"{metadata_folder}\sample_locations_template.xml"
+ dataset_md.saveAsXML(sample_locations_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(sample_locations_template)
+ del sample_locations_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("GRID_Points"):
+
+ print(f"\tGRID_Points")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ grid_points_template = rf"{metadata_folder}\grid_points_template.xml"
+ dataset_md.saveAsXML(grid_points_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(grid_points_template)
+ del grid_points_template
+
+ del dataset_md
+
+ elif "DisMAP_Regions" == dataset_name:
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ dismap_regions_template = rf"{metadata_folder}\dismap_regions_template.xml"
+ dataset_md.saveAsXML(dismap_regions_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(dismap_regions_template)
+ del dismap_regions_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Bathymetry"):
+
+ print(f"\tBathymetry")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ bathymetry_template = rf"{metadata_folder}\bathymetry_template.xml"
+ dataset_md.saveAsXML(bathymetry_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(bathymetry_template)
+ del bathymetry_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Latitude"):
+
+ print(f"\tLatitude")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ latitude_template = rf"{metadata_folder}\latitude_template.xml"
+ dataset_md.saveAsXML(latitude_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(latitude_template)
+ del latitude_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Longitude"):
+
+ print(f"\tLongitude")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ longitude_template = rf"{metadata_folder}\longitude_template.xml"
+ dataset_md.saveAsXML(longitude_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(longitude_template)
+ del longitude_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Raster_Mask"):
+
+ print(f"\tRaster_Mask")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ raster_mask_template = rf"{metadata_folder}\raster_mask_template.xml"
+ dataset_md.saveAsXML(raster_mask_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(raster_mask_template)
+ del raster_mask_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("Mosaic"):
+
+ print(f"\tMosaic")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ mosaic_template = rf"{metadata_folder}\mosaic_template.xml"
+ dataset_md.saveAsXML(mosaic_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(mosaic_template)
+ del mosaic_template
+
+ del dataset_md
+
+ elif dataset_name.endswith(".crf"):
+
+ print(f"\tCRF")
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ crf_template = rf"{metadata_folder}\crf_template.xml"
+ dataset_md.saveAsXML(crf_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(crf_template)
+ del crf_template
+
+ del dataset_md
+
+ else:
+ print(f"\tRegion Table")
+
+ if dataset_name.endswith("IDW"):
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[f"{dataset_name}"]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[f"{dataset_name}"]["Tags"]
+ dataset_md.summary = metadata_dictionary[f"{dataset_name}"]["Summary"]
+ dataset_md.description = metadata_dictionary[f"{dataset_name}"]["Description"]
+ dataset_md.credits = metadata_dictionary[f"{dataset_name}"]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[f"{dataset_name}"]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ idw_region_table_template = rf"{metadata_folder}\idw_region_table_template.xml"
+ dataset_md.saveAsXML(idw_region_table_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(idw_region_table_template)
+ del idw_region_table_template
+
+ del dataset_md
+
+ elif dataset_name.endswith("GLMME"):
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ del empty_md
+
+ dataset_md.title = metadata_dictionary[f"{dataset_name}"]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[f"{dataset_name}"]["Tags"]
+ dataset_md.summary = metadata_dictionary[f"{dataset_name}"]["Summary"]
+ dataset_md.description = metadata_dictionary[f"{dataset_name}"]["Description"]
+ dataset_md.credits = metadata_dictionary[f"{dataset_name}"]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[f"{dataset_name}"]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ glmme_region_table_template = rf"{metadata_folder}\glmme_region_table_template.xml"
+ dataset_md.saveAsXML(glmme_region_table_template, "REMOVE_ALL_SENSITIVE_INFO")
+ pretty_format_xml_file(glmme_region_table_template)
+ del glmme_region_table_template
+
+ del dataset_md
+
+ else:
+ pass
+
+ del dataset_name, dataset_path
+
+ del workspace
+
+
+ del datasets
+
+ # Declared Variables set in function
+ del project_gdb, base_project_folder, metadata_folder
+ del project_folder, scratch_folder, crfs_folder
+ del metadata_dictionary, workspaces
+
+ # Imports
+ del dismap, dataset_title_dict, pretty_format_xml_file
+ del md
+
+ # Function Parameters
+ del base_project_file, project
+
+ except KeyboardInterrupt:
+ raise SystemExit
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(traceback.format_exc() + arcpy.GetMessages())
+ except arcpy.ExecuteError:
+ arcpy.AddError(traceback.format_exc() + arcpy.GetMessages())
+ except Exception:
+ traceback.print_exc()
+ except:
+ traceback.print_exc()
+ else:
+ try:
+ leave_out_keys = ["leave_out_keys", "results"]
+ remaining_keys = [key for key in locals().keys() if not key.startswith('__') and key not in leave_out_keys]
+ if remaining_keys:
+ arcpy.AddWarning(f"Remaining Keys in '{inspect.stack()[0][3]}': ##--> '{', '.join(remaining_keys)}' <--## Line Number: {traceback.extract_stack()[-1].lineno}")
+ del leave_out_keys, remaining_keys
+ return results if "results" in locals().keys() else ["NOTE!! The 'results' variable not yet set!!"]
+ except:
+ raise Exception(traceback.format_exc())
+ finally:
+ if "results" in locals().keys(): del results
+
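Each branch of the long `elif` chain in `create_basic_template_xml_files` repeats the same six metadata assignments and differs only in the template filename it writes; an ordered lookup table could collapse the dispatch. A sketch of that idea only: the rules mirror the branches above in their original order, `resolve_template` is a hypothetical helper, and the `arcpy.metadata` calls themselves would be unchanged:

```python
# Ordered (matcher, template file) pairs, mirroring the elif chain's order.
TEMPLATE_RULES = [
    (lambda n: n == "Datasets",                     "datasets_table_template.xml"),
    (lambda n: n == "Species_Filter",               "species_filter_table_template.xml"),
    (lambda n: "Indicators" in n,                   "indicators_template.xml"),
    (lambda n: "LayerSpeciesYearImageName" in n,    "layer_species_year_image_name_template.xml"),
    (lambda n: n.endswith("Boundary"),              "boundary_template.xml"),
    (lambda n: n.endswith("Extent_Points"),         "extent_points_template.xml"),
    (lambda n: n.endswith("Fishnet"),               "fishnet_template.xml"),
    (lambda n: n.endswith("Lat_Long"),              "lat_long_template.xml"),
    (lambda n: n.endswith("Region"),                "region_template.xml"),
    (lambda n: n.endswith("Sample_Locations"),      "sample_locations_template.xml"),
    (lambda n: n.endswith("GRID_Points"),           "grid_points_template.xml"),
    (lambda n: n == "DisMAP_Regions",               "dismap_regions_template.xml"),
    (lambda n: n.endswith("Bathymetry"),            "bathymetry_template.xml"),
    (lambda n: n.endswith("Latitude"),              "latitude_template.xml"),
    (lambda n: n.endswith("Longitude"),             "longitude_template.xml"),
    (lambda n: n.endswith("Raster_Mask"),           "raster_mask_template.xml"),
    (lambda n: n.endswith("Mosaic"),                "mosaic_template.xml"),
    (lambda n: n.endswith(".crf"),                  "crf_template.xml"),
    (lambda n: n.endswith("IDW"),                   "idw_region_table_template.xml"),
    (lambda n: n.endswith("GLMME"),                 "glmme_region_table_template.xml"),
]

def resolve_template(dataset_name):
    """Return the template XML filename for a dataset, or None if no rule matches."""
    for matches, template in TEMPLATE_RULES:
        if matches(dataset_name):
            return template
    return None
```

The shared block (copy an empty `md.Metadata`, assign title/tags/summary/description/credits/access constraints, `synchronize`, `saveAsXML`, `pretty_format_xml_file`) would then live in one helper called with `resolve_template(dataset_name)`, keeping the per-branch special cases (the `Indicators` rename, the `.crf` key rewrite) as small pre-steps.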
+def import_basic_template_xml_files(base_project_file="", project=""):
+ try:
+ # Import
+ from arcpy import metadata as md
+
+ import dismap
+ importlib.reload(dismap)
+ from dismap import dataset_title_dict, pretty_format_xml_file, unique_years
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2)
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ base_project_folder = os.path.dirname(base_project_file)
+ project_folder = rf"{base_project_folder}\{project}"
+ project_gdb = rf"{project_folder}\{project}.gdb"
+ current_md_folder = rf"{project_folder}\Current Metadata"
+ inport_md_folder = rf"{project_folder}\InPort Metadata"
+ crfs_folder = rf"{project_folder}\CRFs"
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ #print("Creating the Metadata Dictionary. Please wait!!")
+ metadata_dictionary = dataset_title_dict(project_gdb)
+ #print("Creating the Metadata Dictionary. Completed")
+
+ workspaces = [project_gdb, crfs_folder]
+ #workspaces = [crfs_folder]
+
+ for workspace in workspaces:
+
+ arcpy.env.workspace = workspace
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ datasets = list()
+
+ walk = arcpy.da.Walk(workspace)
+
+ for dirpath, dirnames, filenames in walk:
+ for filename in filenames:
+ datasets.append(os.path.join(dirpath, filename))
+ del filename
+ del dirpath, dirnames, filenames
+ del walk
+
+ for dataset_path in sorted(datasets):
+ #print(dataset_path)
+ dataset_name = os.path.basename(dataset_path)
+
+ print(f"Dataset Name: {dataset_name}")
+
+ if "Datasets" == dataset_name:
+
+ print(f"\tDataset Table")
+
+ datasets_table_template = rf"{current_md_folder}\Table\datasets_table_template.xml"
+ template_md = md.Metadata(datasets_table_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ #dataset_md.importMetadata(datasets_table_template)
+ dataset_md.save()
+ #dataset_md.synchronize("SELECTIVE")
+
+ del empty_md, template_md, datasets_table_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Table\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name == "Species_Filter":
+
+ print(f"\tSpecies Filter Table")
+
+ species_filter_table_template = rf"{current_md_folder}\Table\species_filter_table_template.xml"
+ template_md = md.Metadata(species_filter_table_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, species_filter_table_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Table\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif "Indicators" in dataset_name:
+
+ print(f"\tIndicators")
+
+ if dataset_name == "Indicators":
+ indicators_template = rf"{current_md_folder}\Table\indicators_template.xml"
+ else:
+ indicators_template = rf"{current_md_folder}\Table\region_indicators_template.xml"
+
+ template_md = md.Metadata(indicators_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, indicators_template
+
+ # Max-Min Year range table
+ years_md = unique_years(dataset_path)
+ _tags = f", {min(years_md)} to {max(years_md)}"
+ del years_md
+
+ #print(metadata_dictionary[dataset_name]["Tags"])
+ #print(_tags)
+
+ # The plain "Indicators" table is keyed as "Indicators_Table" in the metadata dictionary
+ if dataset_name == "Indicators":
+ dataset_name = f"{dataset_name}_Table"
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Table\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md, _tags
+
+ elif "LayerSpeciesYearImageName" in dataset_name:
+
+ print(f"\tLayer Species Year Image Name")
+
+ layer_species_year_image_name_template = rf"{current_md_folder}\Table\layer_species_year_image_name_template.xml"
+ template_md = md.Metadata(layer_species_year_image_name_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, layer_species_year_image_name_template
+
+ # Max-Min Year range table
+ years_md = unique_years(dataset_path)
+ _tags = f", {min(years_md)} to {max(years_md)}"
+ del years_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Table\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md, _tags
+
+ elif dataset_name.endswith("Boundary"):
+
+ print(f"\tBoundary")
+
+ boundary_template = rf"{current_md_folder}\Boundary\boundary_template.xml"
+ template_md = md.Metadata(boundary_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, boundary_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Boundary\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Boundary\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Extent_Points"):
+
+ print(f"\tExtent_Points")
+
+ extent_points_template = rf"{current_md_folder}\Extent_Points\extent_points_template.xml"
+ template_md = md.Metadata(extent_points_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, extent_points_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Extent_Points\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Extent_Points\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Fishnet"):
+
+ print(f"\tFishnet")
+
+ fishnet_template = rf"{current_md_folder}\Fishnet\fishnet_template.xml"
+ template_md = md.Metadata(fishnet_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, fishnet_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Fishnet\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Fishnet\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Lat_Long"):
+
+ print(f"\tLat_Long")
+
+ lat_long_template = rf"{current_md_folder}\Lat_Long\lat_long_template.xml"
+ template_md = md.Metadata(lat_long_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, lat_long_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Lat_Long\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Lat_Long\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Region"):
+
+ print(f"\tRegion")
+
+ region_template = rf"{current_md_folder}\Region\region_template.xml"
+ template_md = md.Metadata(region_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, region_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Region\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Region\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Sample_Locations"):
+
+ print(f"\tSample_Locations")
+
+ sample_locations_template = rf"{current_md_folder}\Sample_Locations\sample_locations_template.xml"
+ template_md = md.Metadata(sample_locations_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, sample_locations_template
+
+ # Max-Min Year range table
+ years_md = unique_years(dataset_path)
+ _tags = f", {min(years_md)} to {max(years_md)}"
+ del years_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Sample_Locations\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Sample_Locations\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md, _tags
+
+ elif dataset_name.endswith("GRID_Points"):
+
+ print(f"\tGRID_Points")
+
+ grid_points_template = rf"{current_md_folder}\GRID_Points\grid_points_template.xml"
+ template_md = md.Metadata(grid_points_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, grid_points_template
+
+ # Max-Min Year range table
+ years_md = unique_years(dataset_path)
+ _tags = f", {min(years_md)} to {max(years_md)}"
+ del years_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\GRID_Points\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\GRID_Points\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md, _tags
+
+ elif dataset_name == "DisMAP_Regions":
+
+ print(f"\tDisMAP_Regions")
+
+ dismap_regions_template = rf"{current_md_folder}\Region\dismap_regions_template.xml"
+ template_md = md.Metadata(dismap_regions_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, dismap_regions_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Region\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Region\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Bathymetry"):
+
+ print(f"\tBathymetry")
+
+ bathymetry_template = rf"{current_md_folder}\Bathymetry\bathymetry_template.xml"
+ template_md = md.Metadata(bathymetry_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, bathymetry_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Bathymetry\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Bathymetry\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Latitude"):
+
+ print(f"\tLatitude")
+
+ latitude_template = rf"{current_md_folder}\Latitude\latitude_template.xml"
+ template_md = md.Metadata(latitude_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, latitude_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Latitude\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Latitude\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Longitude"):
+
+ print(f"\tLongitude")
+
+ longitude_template = rf"{current_md_folder}\Longitude\longitude_template.xml"
+ template_md = md.Metadata(longitude_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, longitude_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Longitude\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Longitude\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Raster_Mask"):
+
+ print(f"\tRaster_Mask")
+
+ raster_mask_template = rf"{current_md_folder}\Raster_Mask\raster_mask_template.xml"
+ template_md = md.Metadata(raster_mask_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, raster_mask_template
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Raster_Mask\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Raster_Mask\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md
+
+ elif dataset_name.endswith("Mosaic"):
+
+ print(f"\tMosaic")
+
+ mosaic_template = rf"{current_md_folder}\Mosaic\mosaic_template.xml"
+ template_md = md.Metadata(mosaic_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, mosaic_template
+
+ # Max-Min Year range table
+ years_md = unique_years(dataset_path)
+ _tags = f", {min(years_md)} to {max(years_md)}"
+ del years_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Mosaic\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Mosaic\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md, _tags
+
+ elif dataset_name.endswith(".crf"):
+
+ print(f"\tCRF")
+ #print(dataset_name)
+ #print(dataset_path)
+ #dataset_path = dataset_path.replace(crfs_folder, project_gdb).replace(".crf", "_Mosaic")
+ #print(dataset_path)
+
+ crf_template = rf"{current_md_folder}\CRF\crf_template.xml"
+ template_md = md.Metadata(crf_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, crf_template
+
+ # Max-Min Year range table
+ years_md = unique_years(dataset_path.replace(crfs_folder, project_gdb).replace(".crf", "_Mosaic"))
+ _tags = f", {min(years_md)} to {max(years_md)}"
+ del years_md
+
+ dataset_md.title = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Tags"] + _tags
+ dataset_md.summary = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\CRF\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\CRF\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md, _tags
+
+ else:
+ print(f"\tRegion Table")
+
+ if dataset_name.endswith("IDW"):
+
+ idw_region_table_template = rf"{current_md_folder}\Table\idw_region_table_template.xml"
+ template_md = md.Metadata(idw_region_table_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, idw_region_table_template
+
+ # Max-Min Year range table
+ years_md = unique_years(dataset_path)
+ _tags = f", {min(years_md)} to {max(years_md)}"
+ del years_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Table\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md, _tags
+
+ elif dataset_name.endswith("GLMME"):
+
+ glmme_region_table_template = rf"{current_md_folder}\Table\glmme_region_table_template.xml"
+ template_md = md.Metadata(glmme_region_table_template)
+
+ dataset_md = md.Metadata(dataset_path)
+ empty_md = md.Metadata()
+ dataset_md.copy(empty_md)
+ dataset_md.save()
+ dataset_md.copy(template_md)
+ dataset_md.save()
+ del empty_md, template_md, glmme_region_table_template
+
+ # Max-Min Year range table
+ years_md = unique_years(dataset_path)
+ _tags = f", {min(years_md)} to {max(years_md)}"
+ del years_md
+
+ dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
+ dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
+ dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
+ dataset_md.description = metadata_dictionary[dataset_name]["Description"]
+ dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
+ dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
+ dataset_md.save()
+
+ dataset_md.synchronize("ALWAYS")
+
+ out_xml = rf"{current_md_folder}\Table\{dataset_name}.xml"
+ dataset_md.saveAsXML(out_xml)
+ pretty_format_xml_file(out_xml)
+ del out_xml
+
+ target_file_path = rf"{inport_md_folder}\Table\{dataset_name}.xml"
+ custom_xslt_path = rf"{inport_md_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_md, _tags
+
+ else:
+ # No matching metadata template for this dataset type; skip it
+ pass
+
+ del dataset_name, dataset_path
+
+ del workspace
+
+ del datasets
+
+ # Declared Variables set in function
+ del project_gdb, base_project_folder, current_md_folder, inport_md_folder
+ del project_folder, scratch_folder, crfs_folder
+ del metadata_dictionary, workspaces
+
+ # Imports
+ del dismap, dataset_title_dict, pretty_format_xml_file, unique_years
+ del md
+
+ # Function Parameters
+ del base_project_file, project
+
+ except KeyboardInterrupt:
+ raise SystemExit
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ except Exception as e:
+ print(e)
+ traceback.print_exc()
+ except:
+ traceback.print_exc()
+ else:
+ try:
+ leave_out_keys = ["leave_out_keys", "results"]
+ remaining_keys = [key for key in locals().keys() if not key.startswith('__') and key not in leave_out_keys]
+ if remaining_keys:
+ arcpy.AddWarning(f"Remaining Keys in '{inspect.stack()[0][3]}': ##--> '{', '.join(remaining_keys)}' <--## Line Number: {traceback.extract_stack()[-1].lineno}")
+ del leave_out_keys, remaining_keys
+ return results if "results" in locals().keys() else ["NOTE!! The 'results' variable not yet set!!"]
+ except:
+ raise Exception(traceback.format_exc())
+ finally:
+ if "results" in locals().keys(): del results
+
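Each branch of the long `elif` chain above repeats the same template-copy / attribute-assign / export sequence, varying only the template file, the output subfolder, and whether a year-range tag suffix is appended. A possible consolidation (sketch only; `classify_dataset` and its return shape are assumed names, not part of this codebase) is a pure lookup that mirrors the branching, so the copy/assign/export steps could be written once:

```python
def classify_dataset(dataset_name):
    """Mirror of the elif chain: return (subfolder, template file, needs year tags),
    or None when no template applies (the final else/pass case)."""
    # Exact-name and substring matches, checked first as in the original chain
    if dataset_name == "Datasets":
        return ("Table", "datasets_table_template.xml", False)
    if dataset_name == "Species_Filter":
        return ("Table", "species_filter_table_template.xml", False)
    if dataset_name == "DisMAP_Regions":
        return ("Region", "dismap_regions_template.xml", False)
    if "Indicators" in dataset_name:
        template = ("indicators_template.xml" if dataset_name == "Indicators"
                    else "region_indicators_template.xml")
        return ("Table", template, True)
    if "LayerSpeciesYearImageName" in dataset_name:
        return ("Table", "layer_species_year_image_name_template.xml", True)
    # Suffix matches; insertion order preserves the original elif order
    suffix_map = {
        "Boundary":         ("Boundary", "boundary_template.xml", False),
        "Extent_Points":    ("Extent_Points", "extent_points_template.xml", False),
        "Fishnet":          ("Fishnet", "fishnet_template.xml", False),
        "Lat_Long":         ("Lat_Long", "lat_long_template.xml", False),
        "Region":           ("Region", "region_template.xml", False),
        "Sample_Locations": ("Sample_Locations", "sample_locations_template.xml", True),
        "GRID_Points":      ("GRID_Points", "grid_points_template.xml", True),
        "Bathymetry":       ("Bathymetry", "bathymetry_template.xml", False),
        "Latitude":         ("Latitude", "latitude_template.xml", False),
        "Longitude":        ("Longitude", "longitude_template.xml", False),
        "Raster_Mask":      ("Raster_Mask", "raster_mask_template.xml", False),
        "Mosaic":           ("Mosaic", "mosaic_template.xml", True),
        ".crf":             ("CRF", "crf_template.xml", True),
        "IDW":              ("Table", "idw_region_table_template.xml", True),
        "GLMME":            ("Table", "glmme_region_table_template.xml", True),
    }
    for suffix, result in suffix_map.items():
        if dataset_name.endswith(suffix):
            return result
    return None
```

The loop body would then call `classify_dataset` once per dataset and run a single copy/assign/export block, keeping the `.crf` key-rewriting special case where needed.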
+def create_thumbnails(base_project_file="", project=""):
+ try:
+ # Import
+ from arcpy import metadata as md
+
+ import dismap
+ importlib.reload(dismap)
+ from dismap import pretty_format_xml_file
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2)
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ base_project_folder = os.path.dirname(base_project_file)
+ base_project_file = rf"{base_project_folder}\DisMAP.aprx" # point at the project .aprx in the base folder
+ project_folder = rf"{base_project_folder}\{project}"
+ project_gdb = rf"{project_folder}\{project}.gdb"
+ metadata_folder = rf"{project_folder}\Export Metadata"
+ crfs_folder = rf"{project_folder}\CRFs"
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ aprx = arcpy.mp.ArcGISProject(base_project_file)
+ home_folder = aprx.homeFolder
+
+ workspaces = [project_gdb, crfs_folder]
+
+ for workspace in workspaces:
+
+ arcpy.env.workspace = workspace
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ datasets = list()
+
+ walk = arcpy.da.Walk(workspace)
+
+ for dirpath, dirnames, filenames in walk:
+ for filename in filenames:
+ datasets.append(os.path.join(dirpath, filename))
+ del filename
+ del dirpath, dirnames, filenames
+ del walk
+
+            for dataset_path in sorted(datasets):
+                dataset_name = os.path.basename(dataset_path)
+
+                print(f"Dataset Name: {dataset_name}")
+
+                # Ordered (predicate, label, subfolder) rules; the first match
+                # wins, so the equality tests come before broader endswith tests
+                export_rules = [
+                    (lambda n: n == "Datasets",                  "Dataset Table",                 "Table"),
+                    (lambda n: n == "Species_Filter",            "Species Filter Table",          "Table"),
+                    (lambda n: "Indicators" in n,                "Indicators",                    "Table"),
+                    (lambda n: "LayerSpeciesYearImageName" in n, "Layer Species Year Image Name", "Table"),
+                    (lambda n: n.endswith("Boundary"),           "Boundary",                      "Boundary"),
+                    (lambda n: n.endswith("Extent_Points"),      "Extent_Points",                 "Extent_Points"),
+                    (lambda n: n.endswith("Fishnet"),            "Fishnet",                       "Fishnet"),
+                    (lambda n: n.endswith("Lat_Long"),           "Lat_Long",                      "Lat_Long"),
+                    (lambda n: n.endswith("Region"),             "Region",                        "Region"),
+                    (lambda n: n.endswith("Sample_Locations"),   "Sample_Locations",              "Sample_Locations"),
+                    (lambda n: n.endswith("GRID_Points"),        "GRID_Points",                   "GRID_Points"),
+                    (lambda n: n == "DisMAP_Regions",            "DisMAP_Regions",                "Region"),
+                    (lambda n: n.endswith("Bathymetry"),         "Bathymetry",                    "Bathymetry"),
+                    (lambda n: n.endswith("Latitude"),           "Latitude",                      "Latitude"),
+                    (lambda n: n.endswith("Longitude"),          "Longitude",                     "Longitude"),
+                    (lambda n: n.endswith("Raster_Mask"),        "Raster_Mask",                   "Raster_Mask"),
+                    (lambda n: n.endswith("Mosaic"),             "Mosaic",                        "Mosaic"),
+                    (lambda n: n.endswith(".crf"),               "CRF",                           "CRF"),
+                    (lambda n: n.endswith("IDW"),                "Region Table",                  "Table"),
+                    (lambda n: n.endswith("GLMME"),              "Region Table",                  "Table"),
+                ]
+
+                for predicate, label, subfolder in export_rules:
+                    if predicate(dataset_name):
+                        print(f"\t{label}")
+                        dataset_md = md.Metadata(dataset_path)
+                        out_xml = rf"{metadata_folder}\{subfolder}\{dataset_name}.xml"
+                        dataset_md.saveAsXML(out_xml)
+                        pretty_format_xml_file(out_xml)
+                        del out_xml, dataset_md
+                        break
+                del predicate, label, subfolder, export_rules
+
+                del dataset_name, dataset_path
+
+ del workspace, datasets
+
+ del workspaces
+
+ # Declared Variables set in function for aprx
+ del home_folder
+ # Save aprx one more time and then delete
+ aprx.save()
+ del aprx
+
+ # Declared Variables set in function
+ del project_gdb, base_project_folder, metadata_folder, crfs_folder
+ del project_folder, scratch_folder
+
+ # Imports
+ del dismap, pretty_format_xml_file
+ del md
+
+ # Function Parameters
+ del base_project_file, project
+
+    except KeyboardInterrupt:
+        raise SystemExit
+    except arcpy.ExecuteWarning:
+        arcpy.AddWarning(traceback.format_exc() + arcpy.GetMessages())
+    except arcpy.ExecuteError:
+        arcpy.AddError(traceback.format_exc() + arcpy.GetMessages())
+    except Exception:
+        traceback.print_exc()
+ else:
+ try:
+ leave_out_keys = ["leave_out_keys", "results"]
+ remaining_keys = [key for key in locals().keys() if not key.startswith('__') and key not in leave_out_keys]
+ if remaining_keys:
+ arcpy.AddWarning(f"Remaining Keys in '{inspect.stack()[0][3]}': ##--> '{', '.join(remaining_keys)}' <--## Line Number: {traceback.extract_stack()[-1].lineno}")
+ del leave_out_keys, remaining_keys
+ return results if "results" in locals().keys() else ["NOTE!! The 'results' variable not yet set!!"]
+ except:
+            raise Exception(traceback.format_exc())
+ finally:
+ if "results" in locals().keys(): del results
+
+def export_to_inport_xml_files(base_project_file="", project=""):
+ try:
+        if not base_project_file or not project: raise SystemExit("base_project_file and project parameters are required")
+
+ # Import
+ from arcpy import metadata as md
+
+ import dismap
+ importlib.reload(dismap)
+ from dismap import pretty_format_xml_file
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2)
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ base_project_folder = rf"{os.path.dirname(base_project_file)}"
+ project_folder = rf"{base_project_folder}\{project}"
+ project_gdb = rf"{project_folder}\{project}.gdb"
+ metadata_folder = rf"{project_folder}\InPort Metadata"
+ crfs_folder = rf"{project_folder}\CRFs"
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ datasets = [rf"{project_gdb}\Species_Filter", rf"{project_gdb}\Indicators", os.path.join(project_gdb, "DisMAP_Regions"), rf"{project_gdb}\GMEX_IDW_Sample_Locations", rf"{project_gdb}\GMEX_IDW_Mosaic", rf"{crfs_folder}\GMEX_IDW.crf"]
+
+ for dataset_path in sorted(datasets):
+ print(dataset_path)
+
+ dataset_name = os.path.basename(dataset_path)
+
+ print(f"Dataset Name: {dataset_name}")
+
+ target_file_path = rf"{metadata_folder}\{dataset_name}.xml"
+ custom_xslt_path = rf"{metadata_folder}\ArcGIS2InPort.xsl"
+
+ dataset_md = md.Metadata(dataset_path)
+ dataset_md.saveAsUsingCustomXSLT(target_file_path, custom_xslt_path)
+ del dataset_md
+
+            pretty_format_xml_file(target_file_path)
+
+ del target_file_path, custom_xslt_path
+
+ del dataset_name, dataset_path
+
+ del datasets
+
+ # Declared Variables set in function
+ del project_gdb, base_project_folder, metadata_folder
+ del project_folder, scratch_folder, crfs_folder
+
+ # Imports
+ del dismap, pretty_format_xml_file
+ del md
+
+ # Function Parameters
+ del base_project_file, project
+
+    except KeyboardInterrupt:
+        raise SystemExit
+    except arcpy.ExecuteWarning:
+        arcpy.AddWarning(traceback.format_exc() + arcpy.GetMessages())
+        raise SystemExit
+    except arcpy.ExecuteError:
+        arcpy.AddError(traceback.format_exc() + arcpy.GetMessages())
+        raise SystemExit
+    except SystemExit as se:
+        arcpy.AddError(str(se))
+        raise
+    except:
+        traceback.print_exc()
+ else:
+ try:
+ leave_out_keys = ["leave_out_keys", "results"]
+ remaining_keys = [key for key in locals().keys() if not key.startswith('__') and key not in leave_out_keys]
+ if remaining_keys:
+ arcpy.AddWarning(f"Remaining Keys in '{inspect.stack()[0][3]}': ##--> '{', '.join(remaining_keys)}' <--## Line Number: {traceback.extract_stack()[-1].lineno}")
+ del leave_out_keys, remaining_keys
+ return results if "results" in locals().keys() else ["NOTE!! The 'results' variable not yet set!!"]
+ except:
+            raise Exception(traceback.format_exc())
+ finally:
+ if "results" in locals().keys(): del results
+
+def create_maps(base_project_file="", project="", dataset=""):
+ try:
+ # Import
+ from arcpy import metadata as md
+
+ import dismap
+ importlib.reload(dismap)
+ from dismap import pretty_format_xml_file
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(2)
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ base_project_folder = rf"{os.path.dirname(base_project_file)}"
+ base_project_file = rf"{base_project_folder}\DisMAP.aprx"
+ project_folder = rf"{base_project_folder}\{project}"
+ project_gdb = rf"{project_folder}\{project}.gdb"
+ metadata_folder = rf"{project_folder}\Export Metadata"
+ crfs_folder = rf"{project_folder}\CRFs"
+ scratch_folder = rf"{project_folder}\Scratch"
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ aprx = arcpy.mp.ArcGISProject(base_project_file)
+
+ dataset_name = os.path.basename(dataset)
+
+ print(f"Dataset Name: {dataset_name}")
+
+ if dataset_name not in [cm.name for cm in aprx.listMaps()]:
+ print(f"Creating Map: {dataset_name}")
+ aprx.createMap(f"{dataset_name}", "Map")
+ aprx.save()
+ else:
+ pass
+
+ current_map = aprx.listMaps(f"{dataset_name}")[0]
+ print(f"Current Map: {current_map.name}")
+
+ if dataset_name not in [lyr.name for lyr in current_map.listLayers(f"{dataset_name}")]:
+ print(f"Adding {dataset_name} to Map")
+
+ map_layer = arcpy.management.MakeFeatureLayer(dataset, f"{dataset_name}")
+
+ #arcpy.management.Delete(rf"{project_folder}\Layers\{dataset_name}.lyrx")
+ #os.remove(rf"{project_folder}\Layers\{dataset_name}.lyrx")
+
+ map_layer_file = arcpy.management.SaveToLayerFile(map_layer, rf"{project_folder}\Layers\{dataset_name}.lyrx")
+ del map_layer_file
+
+ map_layer_file = arcpy.mp.LayerFile(rf"{project_folder}\Layers\{dataset_name}.lyrx")
+
+ arcpy.management.Delete(map_layer)
+ del map_layer
+
+ current_map.addLayer(map_layer_file)
+ del map_layer_file
+
+ aprx.save()
+ else:
+ pass
+
+ #aprx_basemaps = aprx.listBasemaps()
+ #basemap = 'GEBCO Basemap/Contours (NOAA NCEI Visualization)'
+ basemap = "Terrain with Labels"
+
+ current_map.addBasemap(basemap)
+ del basemap
+
+ # Set Reference Scale
+ current_map.referenceScale = 50000000
+
+ # Clear Selection
+ current_map.clearSelection()
+
+        current_map_cim = current_map.getDefinition('V3')
+        current_map_cim.enableWraparound = True
+        current_map.setDefinition(current_map_cim)
+        del current_map_cim
+
+        # Fetch the dataset's layer so its CIM definition can be edited
+        lyr = current_map.listLayers(f"{dataset_name}")[-1]
+
+        # Return the layer's CIM definition
+        cim_lyr = lyr.getDefinition('V3')
+
+ # Modify the color, width and dash template for the SolidStroke layer
+ symLvl1 = cim_lyr.renderer.symbol.symbol.symbolLayers[0]
+ symLvl1.color.values = [0, 0, 0, 100]
+ symLvl1.width = 1
+
+ # Push the changes back to the layer object
+ lyr.setDefinition(cim_lyr)
+ del symLvl1, cim_lyr
+
+ aprx.save()
+
+ height = arcpy.Describe(dataset).extent.YMax - arcpy.Describe(dataset).extent.YMin
+ width = arcpy.Describe(dataset).extent.XMax - arcpy.Describe(dataset).extent.XMin
+
+ # map_width, map_height
+ map_width, map_height = 8.5, 11
+
+ if height > width:
+ page_height = map_height; page_width = map_width
+ elif height < width:
+ page_height = map_width; page_width = map_height
+ else:
+ page_width = map_width; page_height = map_height
+
+ del map_width, map_height
+ del height, width
+
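The orientation choice above (portrait page for tall extents, landscape for wide ones) can be captured in a small helper; the function name and the square-extent fallback to portrait mirror the inline logic and are otherwise illustrative:

```python
def page_size(extent_width, extent_height, map_width=8.5, map_height=11):
    """Return (page_width, page_height) for a layout fitting the extent."""
    if extent_height > extent_width:
        return map_width, map_height   # tall extent -> portrait 8.5 x 11
    if extent_height < extent_width:
        return map_height, map_width   # wide extent -> landscape 11 x 8.5
    return map_width, map_height       # square extent falls back to portrait

print(page_size(200.0, 100.0))  # (11, 8.5)
print(page_size(100.0, 200.0))  # (8.5, 11)
```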
+ if dataset_name not in [cl.name for cl in aprx.listLayouts()]:
+ print(f"Creating Layout: {dataset_name}")
+ aprx.createLayout(page_width, page_height, "INCH", f"{dataset_name}")
+ aprx.save()
+ else:
+ print(f"Layout: {dataset_name} exists")
+
+ #Set the default map camera to the extent of the park boundary before opening the new view
+ #default camera only affects newly opened views
+ lyr = current_map.listLayers(f"{dataset_name}")[-1]
+
+ #
+ arcpy.management.SelectLayerByAttribute(lyr, 'NEW_SELECTION', "DatasetCode in ('ENBS', 'HI', 'NEUS_SPR')")
+
+ mv = current_map.openView()
+ mv.panToExtent(mv.getLayerExtent(lyr, True, True))
+ mv.zoomToAllLayers()
+ del mv
+
+ arcpy.management.SelectLayerByAttribute(lyr, 'CLEAR_SELECTION')
+
+ av = aprx.activeView
+ av.exportToPNG(rf"{project_folder}\Layers\{dataset_name}.png", width=288, height=192, resolution = 96, color_mode="24-BIT_TRUE_COLOR", embed_color_profile=True)
+ av.exportToJPEG(rf"{project_folder}\Layers\{dataset_name}.jpg", width=288, height=192, resolution = 96, jpeg_color_mode="24-BIT_TRUE_COLOR", embed_color_profile=True)
+ del av
+
+ #print(current_map.referenceScale)
+
+ #export the newly opened active view to PDF, then delete the new map
+ #mv = aprx.activeView
+ #mv.exportToPDF(r"C:\Temp\RangerStations.pdf", width=700, height=500, resolution=96)
+ #aprx.deleteItem(current_map)
+
+ #mv = aprx.activeView
+ #mv = current_map.defaultView
+ #mv.zoomToAllLayers()
+ #print(mv.camera.getExtent())
+ #arcpy.management.Delete(rf"{project_folder}\Layers\{dataset_name}.png")
+ #arcpy.management.Delete(rf"{project_folder}\Layers\{dataset_name}.jpg")
+
+ #os.remove(rf"{project_folder}\Layers\{dataset_name}.png")
+ #os.remove(rf"{project_folder}\Layers\{dataset_name}.jpg")
+
+
+ #mv.exportToPNG(rf"{project_folder}\Layers\{dataset_name}.png", width=288, height=192, resolution = 96, color_mode="24-BIT_TRUE_COLOR", embed_color_profile=True)
+ #mv.exportToJPEG(rf"{project_folder}\Layers\{dataset_name}.jpg", width=288, height=192, resolution = 96, jpeg_color_mode="24-BIT_TRUE_COLOR", embed_color_profile=True)
+ #del mv
+
+ #Export the resulting imported layout and changes to JPEG
+ #print(f"Exporting '{current_layout.name}'")
+ #current_map.exportToJPEG(rf"{project_folder}\Layouts\{current_layout.name}.jpg", page_width, page_height)
+ #current_map.exportToPNG(rf"{project_folder}\Layouts\{current_layout.name}.png", page_width, page_height)
+
+ #fc_md = md.Metadata(dataset)
+ #fc_md.thumbnailUri = rf"{project_folder}\Layouts\{dataset_name}.png"
+ #fc_md.thumbnailUri = rf"{project_folder}\Layouts\{dataset_name}.jpg"
+ #fc_md.save()
+ #del fc_md
+
+ aprx.save()
+
+
+ # # from arcpy import metadata as md
+ # #
+ # # fc_md = md.Metadata(dataset)
+ # # fc_md.thumbnailUri = rf"{project_folder}\Layers\{dataset_name}.png"
+ # # fc_md.save()
+ # # del fc_md
+ # # del md
+
+## aprx.save()
+##
+## current_layout = [cl for cl in aprx.listLayouts() if cl.name == dataset_name][0]
+## print(f"Current Layout: {current_layout.name}")
+##
+## current_layout.openView()
+##
+## # Remove all map frames
+## for mf in current_layout.listElements("MapFrame_Element"): current_layout.deleteElement(mf); del mf
+##
+## # print(f'Layout Name: {current_layout.name}')
+## # print(f' Width x height: {current_layout.pageWidth} x {current_layout.pageHeight} units are {current_layout.pageUnits}')
+## # print(f' MapFrame count: {str(len(current_layout.listElements("MapFrame_Element")))}')
+## # for mf in current_layout.listElements("MapFrame_Element"):
+## # if len(current_layout.listElements("MapFrame_Element")) > 0:
+## # print(f' MapFrame name: {mf.name}')
+## # print(f' Total element count: {str(len(current_layout.listElements()))} \n')
+##
+##
+## print(f"Create a new map frame using a point geometry")
+## #Create a new map frame using a point geometry
+## #mf1 = current_layout.createMapFrame(arcpy.Point(0.01,0.01), current_map, 'New MF - Point')
+## mf1 = current_layout.createMapFrame(arcpy.Point(0.0,0.0), current_map, 'New MF - Point')
+## #mf1.elementWidth = 10
+## #mf1.elementHeight = 7.5
+## #mf1.elementWidth = page_width - 0.01
+## #mf1.elementHeight = page_height - 0.01
+## mf1.elementWidth = page_width
+## mf1.elementHeight = page_height
+
+## lyr = current_map.listLayers(f"{dataset_name}")[0]
+##
+## #Zoom to ALL selected features and export to PDF
+## #arcpy.SelectLayerByAttribute_management(lyr, 'NEW_SELECTION')
+## #mf1.zoomToAllLayers(True)
+## #arcpy.SelectLayerByAttribute_management(lyr, 'CLEAR_SELECTION')
+##
+## #Set the map frame extent to the extent of a layer
+## #mf1.camera.setExtent(mf1.getLayerExtent(lyr, False, True))
+## #mf1.camera.scale = mf1.camera.scale * 1.1 #add a slight buffer
+##
+## del lyr
+
+## print(f"Create a new bookmark set to the map frame's default extent")
+## #Create a new bookmark set to the map frame's default extent
+## bkmk = mf1.createBookmark('Default Extent', "The map's default extent")
+## bkmk.updateThumbnail()
+## del mf1
+## del bkmk
+
+ # Create point text element using a system style item
+ # txtStyleItem = aprx.listStyleItems('ArcGIS 2D', 'TEXT', 'Title (Serif)')[0]
+ # ptTxt = aprx.createTextElement(current_layout, arcpy.Point(5.5, 4.25), 'POINT', f'{dataset_name}', 10, style_item=txtStyleItem)
+ # del txtStyleItem
+
+ # Change the anchor position and reposition the text to center
+ # ptTxt.setAnchor('Center_Point')
+ # ptTxt.elementPositionX = page_width / 2.0
+ # ptTxt.elementPositionY = page_height - 0.25
+ # del ptTxt
+
+ # print(f"Using CIM to update border")
+ # current_layout_cim = current_layout.getDefinition('V3')
+ # for elm in current_layout_cim.elements:
+ # if type(elm).__name__ == 'CIMMapFrame':
+ # if elm.graphicFrame.borderSymbol.symbol.symbolLayers:
+ # sym = elm.graphicFrame.borderSymbol.symbol.symbolLayers[0]
+ # sym.width = 5
+ # sym.color.values = [255, 0, 0, 100]
+ # else:
+ # arcpy.AddWarning(elm.name + ' has NO symbol layers')
+ # current_layout.setDefinition(current_layout_cim)
+ # del current_layout_cim, elm, sym
+
+## ExportLayout = True
+## if ExportLayout:
+## #Export the resulting imported layout and changes to JPEG
+## print(f"Exporting '{current_layout.name}'")
+## current_layout.exportToJPEG(rf"{project_folder}\Layouts\{current_layout.name}.jpg")
+## current_layout.exportToPNG(rf"{project_folder}\Layouts\{current_layout.name}.png")
+## del ExportLayout
+
+## #Export the resulting imported layout and changes to JPEG
+## print(f"Exporting '{current_layout.name}'")
+## current_map.exportToJPEG(rf"{project_folder}\Layouts\{current_layout.name}.jpg", page_width, page_height)
+## current_map.exportToPNG(rf"{project_folder}\Layouts\{current_layout.name}.png", page_width, page_height)
+##
+## fc_md = md.Metadata(dataset)
+## fc_md.thumbnailUri = rf"{project_folder}\Layouts\{current_layout.name}.png"
+## #fc_md.thumbnailUri = rf"{project_folder}\Layouts\{current_layout.name}.jpg"
+## fc_md.save()
+## del fc_md
+##
+## aprx.save()
+
+ # aprx.deleteItem(current_map)
+ #aprx.deleteItem(current_layout)
+
+ del current_map
+ #, current_layout
+ #del page_width, page_height
+ del dataset_name, dataset
+
+ aprx.save()
+
+ print(f"\nCurrent Maps & Layouts")
+
+ current_maps = aprx.listMaps()
+ #current_layouts = aprx.listLayouts()
+
+ if current_maps:
+ print(f"\nCurrent Maps\n")
+ for current_map in current_maps:
+ print(f"\tProject Map: {current_map.name}")
+ del current_map
+ else:
+ arcpy.AddWarning("No maps in Project")
+
+## if current_layouts:
+## print(f"\nCurrent Layouts\n")
+## for current_layout in current_layouts:
+## print(f"\tProject Layout: {current_layout.name}")
+## del current_layout
+## else:
+## arcpy.AddWarning("No layouts in Project")
+
+ #del current_layouts
+ del current_maps
+
+ # Declared Variables set in function for aprx
+
+ # Save aprx one more time and then delete
+ aprx.save()
+ del aprx
+
+ # Declared Variables set in function
+ del project_gdb, base_project_folder, metadata_folder, crfs_folder
+ del project_folder, scratch_folder
+
+ # Imports
+ del dismap, pretty_format_xml_file
+ del md
+
+ # Function Parameters
+ del base_project_file, project
+
+    except KeyboardInterrupt:
+        raise SystemExit
+    except arcpy.ExecuteWarning:
+        arcpy.AddWarning(traceback.format_exc() + arcpy.GetMessages())
+        raise Exception
+    except arcpy.ExecuteError:
+        arcpy.AddError(traceback.format_exc() + arcpy.GetMessages())
+        raise Exception
+    except Exception:
+        traceback.print_exc()
+ else:
+ try:
+ leave_out_keys = ["leave_out_keys", "results"]
+ remaining_keys = [key for key in locals().keys() if not key.startswith('__') and key not in leave_out_keys]
+ if remaining_keys:
+ arcpy.AddWarning(f"Remaining Keys in '{inspect.stack()[0][3]}': ##--> '{', '.join(remaining_keys)}' <--## Line Number: {traceback.extract_stack()[-1].lineno}")
+ del leave_out_keys, remaining_keys
+ return results if "results" in locals().keys() else ["NOTE!! The 'results' variable not yet set!!"]
+ except:
+            raise Exception(traceback.format_exc())
+ finally:
+ if "results" in locals().keys(): del results
+
+def main(project=""):
+ try:
+ # Imports
+ import dismap
+ importlib.reload(dismap)
+
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ base_project_folder = os.path.dirname(os.path.dirname(__file__))
+ base_project_file = rf"{base_project_folder}\DisMAP.aprx"
+ project_gdb = rf"{base_project_folder}\{project}\{project}.gdb"
+
+ # Test if passed workspace exists, if not raise SystemExit
+ if not arcpy.Exists(base_project_file):
+ raise SystemExit(line_info(f"{os.path.basename(base_project_file)} is missing!!"))
+
+ # Test if passed workspace exists, if not raise SystemExit
+ if not arcpy.Exists(project_gdb):
+ raise SystemExit(line_info(f"{os.path.basename(project_gdb)} is missing!!"))
+
+ del base_project_folder
+
+ Backup = False
+ if Backup:
+ dismap.backup_gdb(project_gdb)
+ del Backup
+
+ # Imports
+ del dismap
+ # Function parameters
+
+ results = []
+
+ try:
+
+ CreateBasicTemplateXMLFiles = False
+ if CreateBasicTemplateXMLFiles:
+ result = create_basic_template_xml_files(base_project_file, project)
+ results.extend(result); del result
+ del CreateBasicTemplateXMLFiles
+
+ ImportBasicTemplateXmlFiles = False
+ if ImportBasicTemplateXmlFiles:
+ result = import_basic_template_xml_files(base_project_file, project)
+ results.extend(result); del result
+ del ImportBasicTemplateXmlFiles
+
+ CreateThumbnails = False
+ if CreateThumbnails:
+ result = create_thumbnails(base_project_file, project)
+ results.extend(result); del result
+ del CreateThumbnails
+
+ CreateMaps = False
+ if CreateMaps:
+ result = create_maps(base_project_file, project, dataset=os.path.join(project_gdb, "DisMAP_Regions"))
+ results.extend(result); del result
+ del CreateMaps
+
+ ExportToInportXmlFiles = False
+ if ExportToInportXmlFiles:
+ result = export_to_inport_xml_files(base_project_file, project)
+ results.extend(result); del result
+ del ExportToInportXmlFiles
+
+ except Exception as e:
+ arcpy.AddError(str(e))
+
+ # Variable created in function
+ del project_gdb
+ # Function parameters
+ del base_project_file, project
+
+ except KeyboardInterrupt:
+ raise SystemExit
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+    except Exception:
+        traceback.print_exc()
+ else:
+ try:
+ leave_out_keys = ["leave_out_keys", "results"]
+ remaining_keys = [key for key in locals().keys() if not key.startswith('__') and key not in leave_out_keys]
+ if remaining_keys:
+ arcpy.AddWarning(f"Remaining Keys in '{inspect.stack()[0][3]}': ##--> '{', '.join(remaining_keys)}' <--## Line Number: {traceback.extract_stack()[-1].lineno}")
+ del leave_out_keys, remaining_keys
+ return results if "results" in locals().keys() else ["NOTE!! The 'results' variable not yet set!!"]
+ except:
+            raise Exception(traceback.format_exc())
+ finally:
+ if "results" in locals().keys(): del results
+
+if __name__ == "__main__":
+ try:
+ # Import this Python module
+ import dismap_metadata_processing
+ importlib.reload(dismap_metadata_processing)
+
+ print(f"{'-' * 90}")
+ print(f"Python Script: {os.path.basename(__file__)}")
+ print(f"Location: {os.path.dirname(__file__)}")
+ print(f"Python Version: {sys.version} Environment: {os.path.basename(sys.exec_prefix)}")
+ print(f"{'-' * 90}\n")
+
+ #project = "May 1 2024"
+ #project = "July 1 2024"
+ project = "December 1 2024"
+
+ # Tested on 8/1/2024 -- PASSED
+ main(project=project)
+
+ del project
+
+ from time import localtime, strftime
+
+ print(f"\n{'-' * 90}")
+ print(f"Python script: {os.path.basename(__file__)} successfully completed {strftime('%a %b %d %I:%M %p', localtime())}")
+ print(f"{'-' * 90}")
+ del localtime, strftime
+
+ except:
+ traceback.print_exc()
+ else:
+ leave_out_keys = ["leave_out_keys" ]
+ leave_out_keys.extend([name for name, obj in inspect.getmembers(sys.modules[__name__]) if inspect.isfunction(obj) or inspect.ismodule(obj)])
+ remaining_keys = [key for key in locals().keys() if not key.startswith("__") and key not in leave_out_keys]
+ if remaining_keys: arcpy.AddWarning(f"Remaining Keys: ##--> '{', '.join(remaining_keys)}' <--##")
+ del leave_out_keys, remaining_keys
+ finally:
+ pass
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/copy_initial_data.py b/ArcGIS-Analysis-Python/src/dismap_tools/copy_initial_data.py
new file mode 100644
index 0000000..00e0020
--- /dev/null
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/copy_initial_data.py
@@ -0,0 +1,205 @@
+"""
+Script documentation
+- Tool parameters are accessed using arcpy.GetParameter() or
+ arcpy.GetParameterAsText()
+- Update derived parameter values using arcpy.SetParameter() or
+ arcpy.SetParameterAsText()
+"""
+import os
+
+import arcpy
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+    filename = __file__
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
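The `trace()` helper above parses the line number out of a formatted traceback string. A sketch of the same idea using `traceback.extract_tb`, which exposes the line number directly (the helper name is illustrative):

```python
import sys
import traceback

def trace_info():
    """Line number and final message line of the exception being handled."""
    tb = sys.exc_info()[2]
    line = traceback.extract_tb(tb)[-1].lineno   # innermost frame's line
    synerror = traceback.format_exc().splitlines()[-1]
    return line, synerror

try:
    1 / 0
except ZeroDivisionError:
    line, err = trace_info()
    print(line, err)  # err ends with 'division by zero'
```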
+def script_tool(project_folder="", csv_data_file="", dataset_shapefiles="", contacts_file=""):
+ """Script code goes below"""
+ try:
+ from zipfile import ZipFile
+ from arcpy import metadata as md
+ from lxml import etree
+ from io import StringIO
+
+ arcpy.env.overwriteOutput = True
+
+ #aprx = arcpy.mp.ArcGISProject("CURRENT")
+ #aprx.save()
+ #project_folder = aprx.homeFolder
+ arcpy.AddMessage(project_folder)
+ out_data_path = rf"{project_folder}\CSV_Data"
+
+ import json
+ json_path = rf"{out_data_path}\root_dict.json"
+ with open(json_path, "r") as json_file:
+ root_dict = json.load(json_file)
+ del json_file
+ del json_path
+ del json
+
+ arcpy.AddMessage(out_data_path)
+ # Change Directory
+ os.chdir(out_data_path)
+ arcpy.AddMessage(f"Un-Zipping files from {os.path.basename(csv_data_file)}")
+ with ZipFile(csv_data_file, mode="r") as archive:
+ for file in archive.namelist():
+ archive.extract(file, ".")
+ del file
+ del archive
+ arcpy.AddMessage(f"Done Un-Zipping files from {os.path.basename(csv_data_file)}")
+ tmp_workspace = arcpy.env.workspace
+ arcpy.env.workspace = rf"{out_data_path}\python"
+
+ csv_files = arcpy.ListFiles("*_survey.csv")
+
+ arcpy.AddMessage("Copying CSV Files and renaming the file")
+ for csv_file in csv_files:
+ arcpy.management.Copy(rf"{out_data_path}\python\{csv_file}", rf"{out_data_path}\{csv_file.replace('_survey', '_IDW')}")
+ del csv_file
+ del csv_files
+
+ arcpy.env.workspace = tmp_workspace
+ del tmp_workspace
+
+ if arcpy.Exists(rf"{out_data_path}\python"):
+ arcpy.AddMessage("Removing the extract folder")
+ arcpy.management.Delete(rf"{out_data_path}\python")
+ else:
+ pass
+
+ arcpy.AddMessage("Adding metadata to CSV file")
+ tmp_workspace = arcpy.env.workspace
+ arcpy.env.workspace = out_data_path
+
+ csv_files = arcpy.ListFiles("*_IDW.csv")
+ for csv_file in csv_files:
+ arcpy.AddMessage(f"\t{csv_file}")
+ dataset_md = md.Metadata(rf"{out_data_path}\{csv_file}")
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ dataset_md.importMetadata(contacts_file, "ARCGIS_METADATA")
+ dataset_md.save()
+ dataset_md.synchronize("OVERWRITE")
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+ target_tree = etree.parse(StringIO(dataset_md.xml), parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
+ target_root = target_tree.getroot()
+ target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag])
+ new_item_name = target_root.find("Esri/DataProperties/itemProps/itemName").text
+ arcpy.AddMessage(new_item_name)
+## onLineSrcs = target_root.findall("distInfo/distTranOps/onLineSrc")
+## #arcpy.AddMessage(onLineSrcs)
+## for onLineSrc in onLineSrcs:
+## if onLineSrc.find('./protocol').text == "ESRI REST Service":
+## old_linkage_element = onLineSrc.find('./linkage')
+## old_linkage = old_linkage_element.text
+## #arcpy.AddMessage(old_linkage)
+## old_item_name = old_linkage[old_linkage.find("/services/")+len("/services/"):old_linkage.find("/FeatureServer")]
+## new_linkage = old_linkage.replace(old_item_name, new_item_name)
+## #arcpy.AddMessage(new_linkage)
+## old_linkage_element.text = new_linkage
+## #arcpy.AddMessage(old_linkage_element.text)
+## del old_linkage_element
+## del old_item_name, old_linkage, new_linkage
+## onLineSrc.find('./orName').text = f"{new_item_name} Feature Service"
+## del onLineSrcs, new_item_name
+ etree.indent(target_root, space=' ')
+ dataset_md.xml = etree.tostring(target_tree, encoding='UTF-8', method='xml', xml_declaration=True, pretty_print=True)
+ dataset_md.save()
+ dataset_md.synchronize("ALWAYS")
+ dataset_md.save()
+
+ del dataset_md
+
+ del csv_file
+ del csv_files
+
+ arcpy.env.workspace = tmp_workspace
+ del tmp_workspace
+
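The metadata loop above reorders the XML root's children with a slice assignment keyed on `root_dict`. That step relies on lxml, but the same trick works with the stdlib `ElementTree`; `root_order` here is a stand-in for the tag-order dict loaded from `root_dict.json`:

```python
import xml.etree.ElementTree as ET

# Stand-in for root_dict: tag name -> desired position among the root's children
root_order = {"Esri": 0, "dataIdInfo": 1, "distInfo": 2}

root = ET.fromstring("<metadata><distInfo/><Esri/><dataIdInfo/></metadata>")

# Slice-assign the children back in sorted order, as the lxml code does
root[:] = sorted(root, key=lambda child: root_order[child.tag])

print([child.tag for child in root])  # ['Esri', 'dataIdInfo', 'distInfo']
```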
+ out_data_path = os.path.join(os.path.dirname(project_folder), "Dataset Shapefiles")
+ arcpy.AddMessage(out_data_path)
+
+ # Change Directory
+ os.chdir(out_data_path)
+ arcpy.AddMessage(f"Un-Zipping files from {os.path.basename(dataset_shapefiles)}")
+ with ZipFile(dataset_shapefiles, mode="r") as archive:
+ for file in archive.namelist():
+ archive.extract(file, ".")
+ del file
+ del archive
+ del out_data_path
+
+ # Imports
+ del md
+
+ # Function Variables
+ del project_folder, csv_data_file, dataset_shapefiles, contacts_file
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == "__main__":
+ try:
+ project_folder = arcpy.GetParameterAsText(0)
+ if not project_folder:
+ project_folder = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026")
+ else:
+ pass
+
+ csv_data_file = arcpy.GetParameterAsText(1)
+ if not csv_data_file:
+ csv_data_file = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\Initial Data\\CSV Data 20260201.zip")
+ else:
+ pass
+
+ contacts_file = arcpy.GetParameterAsText(2)
+ if not contacts_file:
+ contacts_file = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\Initial Data\\DisMAP Contacts 20260201.xml")
+ else:
+ pass
+
+ dataset_shapefiles = arcpy.GetParameterAsText(3)
+ if not dataset_shapefiles:
+ dataset_shapefiles = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\Initial Data\\Dataset Shapefiles 20260201.zip")
+ else:
+ pass
+
+ script_tool(project_folder, csv_data_file, dataset_shapefiles, contacts_file)
+
+ arcpy.SetParameterAsText(3, True)
+
+ del project_folder, csv_data_file, dataset_shapefiles, contacts_file
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
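The commented-out metadata block near the top of this file rewrites the item name inside an "ESRI REST Service" linkage URL by slicing between `/services/` and `/FeatureServer`, then substituting the new name. A standalone sketch of that string logic, outside of arcpy (the URL and `"NewName"` below are made-up examples, not values from the script):

```python
def replace_service_item_name(linkage: str, new_item_name: str) -> str:
    """Swap the item name found between '/services/' and '/FeatureServer'
    in a feature-service REST URL, mirroring the slicing in the script."""
    start = linkage.find("/services/") + len("/services/")
    end = linkage.find("/FeatureServer")
    old_item_name = linkage[start:end]
    return linkage.replace(old_item_name, new_item_name)

# Hypothetical linkage value for illustration only.
old = "https://services.arcgis.com/org/arcgis/rest/services/OldName/FeatureServer/0"
print(replace_service_item_name(old, "NewName"))
```

Note that `str.replace` substitutes every occurrence of the old item name, so this only behaves as intended when the item name does not also appear elsewhere in the URL.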
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_base_bathymetry.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_base_bathymetry.py
index 5d6982f..541a185 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_base_bathymetry.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_base_bathymetry.py
@@ -9,13 +9,20 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
-import traceback
-import importlib
-import inspect
+import os
+import sys
import arcpy # third-parties second
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+    filename = __file__  # report this module's path rather than a hard-coded "test.py" placeholder
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
def raster_properties_report(dataset=""):
try:
if not dataset:
@@ -44,31 +51,22 @@ def raster_properties_report(dataset=""):
del pixel_types
del dataset
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- except SystemExit:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- except Exception:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- except:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk:
- arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
- else:
- pass
- del rk
return True
- finally:
- pass
def create_alasaka_bathymetry(project_folder=""):
try:
@@ -97,7 +95,7 @@ def create_alasaka_bathymetry(project_folder=""):
arcpy.env.outputCoordinateSystem = None
- arcpy.AddMessage(f"Processing Alaska Bathymetry")
+ arcpy.AddMessage("Processing Alaska Bathymetry")
# ###--->>> Setting up the base folder bathymetry for all projects
# Set Alaska Bathymetry
@@ -120,7 +118,7 @@ def create_alasaka_bathymetry(project_folder=""):
enbs_bathymetry = rf"{project_folder}\Bathymetry\Bathymetry.gdb\ENBS_IDW_Bathymetry"
nbs_bathymetry = rf"{project_folder}\Bathymetry\Bathymetry.gdb\NBS_IDW_Bathymetry"
- arcpy.AddMessage(f"Processing Esri Raster Grids")
+ arcpy.AddMessage("Processing Esri Raster Grids")
spatial_ref = arcpy.Describe(ai_bathy).spatialReference.name
arcpy.AddMessage(f"Spatial Reference for {os.path.basename(ai_bathy)}: {spatial_ref}")
@@ -140,61 +138,61 @@ def create_alasaka_bathymetry(project_folder=""):
del spatial_ref
- arcpy.AddMessage(f"Copy AI_IDW_Bathy.grd to AI_IDW_Bathy_Grid")
+ arcpy.AddMessage("Copy AI_IDW_Bathy.grd to AI_IDW_Bathy_Grid")
arcpy.management.CopyRaster(ai_bathy, ai_bathy_grid)
arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
del ai_bathy
- arcpy.AddMessage(f"Copy EBS_IDW_Bathy.grd to EBS_IDW_Bathy_Grid")
+ arcpy.AddMessage("Copy EBS_IDW_Bathy.grd to EBS_IDW_Bathy_Grid")
arcpy.management.CopyRaster(ebs_bathy, ebs_bathy_grid)
arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
del ebs_bathy
- arcpy.AddMessage(f"Copy GOA_IDW_Bathy.grd to GOA_IDW_Bathy_Grid")
+ arcpy.AddMessage("Copy GOA_IDW_Bathy.grd to GOA_IDW_Bathy_Grid")
arcpy.management.CopyRaster(goa_bathy, goa_bathy_grid)
arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
del goa_bathy
- arcpy.AddMessage(f"Converting AI_IDW_Bathy_Grid from positive values to negative")
+ arcpy.AddMessage("Converting AI_IDW_Bathy_Grid from positive values to negative")
tmp_grid = arcpy.sa.Times(ai_bathy_grid, -1)
arcpy.AddMessage("\tTimes: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
tmp_grid.save(ai_bathy_raster)
del tmp_grid
- arcpy.AddMessage(f"Converting EBS_IDW_Bathy_Grid from positive values to negative")
+ arcpy.AddMessage("Converting EBS_IDW_Bathy_Grid from positive values to negative")
tmp_grid = arcpy.sa.Times(ebs_bathy_grid, -1)
arcpy.AddMessage("\tTimes: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
tmp_grid.save(ebs_bathy_raster)
del tmp_grid
- arcpy.AddMessage(f"Setting values equal to and less than 0 in the GOA_IDW_Bathy_Grid Null values")
+        arcpy.AddMessage("Setting values less than or equal to 0 in the GOA_IDW_Bathy_Grid to Null")
tmp_grid = arcpy.sa.SetNull(goa_bathy_grid, goa_bathy_grid, "Value < -1.0")
arcpy.AddMessage("\tSet Null: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
tmp_grid.save(goa_bathy_raster+'_SetNull')
del tmp_grid
- arcpy.AddMessage(f"Converting the GOA_IDW_Bathy_Grid from positive values to negative")
+ arcpy.AddMessage("Converting the GOA_IDW_Bathy_Grid from positive values to negative")
tmp_grid = arcpy.sa.Times(goa_bathy_raster+'_SetNull', -1)
arcpy.AddMessage("\tTimes: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
tmp_grid.save(goa_bathy_raster)
del tmp_grid
- arcpy.AddMessage(f"Deleteing the GOA_IDW_Bathy Null grid")
+        arcpy.AddMessage("Deleting the GOA_IDW_Bathy Null grid")
arcpy.management.Delete(goa_bathy_raster+'_SetNull')
arcpy.AddMessage("\tDelete: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- arcpy.AddMessage(f"Appending the AI raster to the GOA grid to ensure complete coverage")
+ arcpy.AddMessage("Appending the AI raster to the GOA grid to ensure complete coverage")
extent = arcpy.Describe(goa_bathy_raster).extent
X_Min, Y_Min, X_Max, Y_Max = extent.XMin-(1000 * 366), extent.YMin-(1000 * 80), extent.XMax, extent.YMax
@@ -204,13 +202,13 @@ def create_alasaka_bathymetry(project_folder=""):
arcpy.management.Append(inputs = ai_bathy_raster, target = goa_bathy_raster, schema_type="TEST", field_mapping="", subtype="")
arcpy.AddMessage("\tAppend: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- arcpy.AddMessage(f"Cliping GOA Raster")
+        arcpy.AddMessage("Clipping GOA Raster")
arcpy.management.Clip(goa_bathy_raster, extent, goa_bathy_raster+"_Clip")
arcpy.AddMessage("\tClip: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
del extent
- arcpy.AddMessage(f"Copying GOA Raster")
+ arcpy.AddMessage("Copying GOA Raster")
arcpy.management.CopyRaster(goa_bathy_raster+"_Clip", goa_bathy_raster)
arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
@@ -218,7 +216,7 @@ def create_alasaka_bathymetry(project_folder=""):
arcpy.management.Delete(goa_bathy_raster+"_Clip")
arcpy.AddMessage("\tDelete: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- arcpy.AddMessage(f"Appending the EBS raster to the AI grid to ensure complete coverage")
+ arcpy.AddMessage("Appending the EBS raster to the AI grid to ensure complete coverage")
extent = arcpy.Describe(ai_bathy_raster).extent
X_Min, Y_Min, X_Max, Y_Max = extent.XMin, extent.YMin, extent.XMax, extent.YMax
@@ -229,19 +227,19 @@ def create_alasaka_bathymetry(project_folder=""):
arcpy.management.Append(inputs = ebs_bathy_raster, target = ai_bathy_raster, schema_type="TEST", field_mapping="", subtype="")
arcpy.AddMessage("\tAppend: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- arcpy.AddMessage(f"Cliping AI Raster")
+        arcpy.AddMessage("Clipping AI Raster")
arcpy.management.Clip(ai_bathy_raster, extent, ai_bathy_raster+"_Clip")
arcpy.AddMessage("\tClip: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
del extent
- arcpy.AddMessage(f"Copying AI Raster")
+ arcpy.AddMessage("Copying AI Raster")
arcpy.management.CopyRaster(ai_bathy_raster+"_Clip", ai_bathy_raster)
arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- arcpy.management.Delete(ai_bathy_raster+"_Clip")
- arcpy.AddMessage("\tDelete: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ #arcpy.management.Delete(ai_bathy_raster+"_Clip")
+ #arcpy.AddMessage("\tDelete: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
arcpy.ClearEnvironment("extent")
@@ -257,7 +255,7 @@ def create_alasaka_bathymetry(project_folder=""):
#del region_sr
del region
- arcpy.AddMessage(f"Copy AI_IDW_Bathymetry_Raster to AI_IDW_Bathymetry")
+ arcpy.AddMessage("Copy AI_IDW_Bathymetry_Raster to AI_IDW_Bathymetry")
arcpy.management.CopyRaster(ai_bathy_raster, ai_bathymetry)
arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
@@ -272,7 +270,7 @@ def create_alasaka_bathymetry(project_folder=""):
#del region_sr
del region
- arcpy.AddMessage(f"Copy EBS_IDW_Bathymetry_Raster to EBS_IDW_Bathymetry")
+ arcpy.AddMessage("Copy EBS_IDW_Bathymetry_Raster to EBS_IDW_Bathymetry")
arcpy.management.CopyRaster(ebs_bathy_raster, ebs_bathymetry)
arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
@@ -287,7 +285,7 @@ def create_alasaka_bathymetry(project_folder=""):
#del region_sr
del region
- arcpy.AddMessage(f"Copy EBS_IDW_Bathymetry_Raster to ENBS_Bathymetry")
+ arcpy.AddMessage("Copy EBS_IDW_Bathymetry_Raster to ENBS_Bathymetry")
arcpy.management.CopyRaster(ebs_bathy_raster, enbs_bathymetry)
arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
@@ -302,7 +300,7 @@ def create_alasaka_bathymetry(project_folder=""):
#del region_sr
del region
- arcpy.AddMessage(f"Copy EBS_IDW_Bathymetry_Raster to NBS_Bathymetry")
+ arcpy.AddMessage("Copy EBS_IDW_Bathymetry_Raster to NBS_Bathymetry")
arcpy.management.CopyRaster(ebs_bathy_raster, nbs_bathymetry)
arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
@@ -317,7 +315,7 @@ def create_alasaka_bathymetry(project_folder=""):
#del region_sr
del region
- arcpy.AddMessage(f"Copy GOA_IDW_Bathymetry_Raster to GOA_IDW_Bathymetry")
+ arcpy.AddMessage("Copy GOA_IDW_Bathymetry_Raster to GOA_IDW_Bathymetry")
arcpy.management.CopyRaster(goa_bathy_raster, goa_bathymetry)
arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
@@ -329,49 +327,43 @@ def create_alasaka_bathymetry(project_folder=""):
# ###--->>> Copy rasters for Base Folder to Project Folder Start
- # Set Output Coordinate System
- arcpy.env.outputCoordinateSystem = arcpy.Describe(ai_bathymetry).spatialReference
-
- arcpy.AddMessage(f"Copy AI_IDW_Bathymetry to the Project Bathymetry GDB")
- arcpy.management.CopyRaster(ai_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\{os.path.basename(ai_bathymetry)}")
- arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-
- # Set Output Coordinate System
- arcpy.env.outputCoordinateSystem = arcpy.Describe(ebs_bathymetry).spatialReference
-
- arcpy.AddMessage(f"Copy EBS_IDW_Bathymetry to the Project Bathymetry GDB")
- arcpy.management.CopyRaster(ebs_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\{os.path.basename(ebs_bathymetry)}")
- arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-
- # Set Output Coordinate System
- arcpy.env.outputCoordinateSystem = arcpy.Describe(enbs_bathymetry).spatialReference
-
- arcpy.AddMessage(f"Copy ENBS_IDW_Bathymetry to the Project Bathymetry GDB")
- arcpy.management.CopyRaster(enbs_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\{os.path.basename(enbs_bathymetry)}")
- arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-
- # Set Output Coordinate System
- arcpy.env.outputCoordinateSystem = arcpy.Describe(nbs_bathymetry).spatialReference
-
- arcpy.AddMessage(f"Copy nbs_bathymetry to the Project Bathymetry GDB")
- arcpy.management.CopyRaster(nbs_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\{os.path.basename(nbs_bathymetry)}")
- arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-
- # Set Output Coordinate System
- arcpy.env.outputCoordinateSystem = arcpy.Describe(goa_bathymetry).spatialReference
-
- arcpy.AddMessage(f"Copy GOA_IDW_Bathymetry to the Project Bathymetry GDB")
- arcpy.management.CopyRaster(goa_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\{os.path.basename(goa_bathymetry)}")
- arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-
- gdb = rf"{project_folder}\Bathymetry\Bathymetry.gdb"
- arcpy.AddMessage(f"Compacting the {os.path.basename(gdb)} GDB")
- arcpy.management.Compact(gdb)
- arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
- del gdb
+## # Set Output Coordinate System
+## arcpy.env.outputCoordinateSystem = arcpy.Describe(ai_bathymetry).spatialReference
+##
+## arcpy.AddMessage("Copy AI_IDW_Bathymetry to the Project Bathymetry GDB")
+## arcpy.management.CopyRaster(ai_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\{os.path.basename(ai_bathymetry)}")
+## arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+##
+## # Set Output Coordinate System
+## arcpy.env.outputCoordinateSystem = arcpy.Describe(ebs_bathymetry).spatialReference
+##
+## arcpy.AddMessage("Copy EBS_IDW_Bathymetry to the Project Bathymetry GDB")
+## arcpy.management.CopyRaster(ebs_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\{os.path.basename(ebs_bathymetry)}")
+## arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+##
+## # Set Output Coordinate System
+## arcpy.env.outputCoordinateSystem = arcpy.Describe(enbs_bathymetry).spatialReference
+##
+## arcpy.AddMessage("Copy ENBS_IDW_Bathymetry to the Project Bathymetry GDB")
+## arcpy.management.CopyRaster(enbs_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\{os.path.basename(enbs_bathymetry)}")
+## arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+##
+## # Set Output Coordinate System
+## arcpy.env.outputCoordinateSystem = arcpy.Describe(nbs_bathymetry).spatialReference
+##
+## arcpy.AddMessage("Copy nbs_bathymetry to the Project Bathymetry GDB")
+## arcpy.management.CopyRaster(nbs_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\{os.path.basename(nbs_bathymetry)}")
+## arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+##
+## # Set Output Coordinate System
+## arcpy.env.outputCoordinateSystem = arcpy.Describe(goa_bathymetry).spatialReference
+##
+## arcpy.AddMessage("Copy GOA_IDW_Bathymetry to the Project Bathymetry GDB")
+## arcpy.management.CopyRaster(goa_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\{os.path.basename(goa_bathymetry)}")
+## arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
gdb = rf"{project_folder}\Bathymetry\Bathymetry.gdb"
- arcpy.AddMessage(f"Compacting the {os.path.basename(gdb)} GDB")
+        arcpy.AddMessage(f"Compacting the {os.path.basename(gdb)} GDB")
arcpy.management.Compact(gdb)
arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
del gdb
@@ -383,31 +375,22 @@ def create_alasaka_bathymetry(project_folder=""):
# Function parameter
del project_folder
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- except SystemExit:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- except Exception:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- except:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk:
- arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
- else:
- pass
- del rk
return True
- finally:
- pass
def create_hawaii_bathymetry(project_folder=""):
try:
@@ -437,7 +420,7 @@ def create_hawaii_bathymetry(project_folder=""):
hi_bathy_raster = rf"{project_folder}\Bathymetry\Bathymetry.gdb\HI_IDW_Bathy_Raster"
hi_bathymetry = rf"{project_folder}\Bathymetry\Bathymetry.gdb\HI_IDW_Bathymetry"
- arcpy.AddMessage(f"Converting Hawaii Polygon Grid to a Raster")
+ arcpy.AddMessage("Converting Hawaii Polygon Grid to a Raster")
arcpy.conversion.PolygonToRaster(in_features = hi_bathy_grid, value_field = "Depth_MEDI", out_rasterdataset = hi_bathy_raster, cell_assignment="CELL_CENTER", priority_field="NONE", cellsize="500")
arcpy.AddMessage("\tPolygon To Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
@@ -447,16 +430,16 @@ def create_hawaii_bathymetry(project_folder=""):
tmp_grid.save(hi_bathymetry)
del tmp_grid
- arcpy.AddMessage(f"Copy Hawaii Raster to the Bathymetry GDB")
-
- arcpy.management.CopyRaster(hi_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\HI_IDW_Bathymetry")
- arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-
- gdb = rf"{project_folder}\Bathymetry\Bathymetry.gdb"
- arcpy.AddMessage(f"Compacting the {os.path.basename(gdb)} GDB")
- arcpy.management.Compact(gdb)
- arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
- del gdb
+## arcpy.AddMessage("Copy Hawaii Raster to the Bathymetry GDB")
+##
+## arcpy.management.CopyRaster(hi_bathymetry, rf"{project_folder}\Bathymetry\Bathymetry.gdb\HI_IDW_Bathymetry")
+## arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+##
+## gdb = rf"{project_folder}\Bathymetry\Bathymetry.gdb"
+## arcpy.AddMessage(f"Compacting the {os.path.basename(gdb)} GDB")
+## arcpy.management.Compact(gdb)
+## arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+## del gdb
gdb = rf"{project_folder}\Bathymetry\Bathymetry.gdb"
arcpy.AddMessage(f"Compacting the {os.path.basename(gdb)} GDB")
@@ -469,31 +452,22 @@ def create_hawaii_bathymetry(project_folder=""):
# Function parameter
del project_folder
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- except SystemExit:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- except Exception:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- except:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk:
- arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
- else:
- pass
- del rk
return True
- finally:
- pass
def gebco_bathymetry(project_folder=""):
try:
@@ -523,7 +497,7 @@ def gebco_bathymetry(project_folder=""):
arcpy.env.outputCoordinateSystem = None
- arcpy.AddMessage(f"Processing GEBCO Raster Grids")
+ arcpy.AddMessage("Processing GEBCO Raster Grids")
#gebco_dict = get_dms_points_for_gebco(project_gdb)
gebco_dict = {
@@ -537,7 +511,7 @@ def gebco_bathymetry(project_folder=""):
'WC_TRI_IDW' : 'gebco_2022_n49.2_s36.0_w-126.6_e-121.6.asc',
}
- arcpy.AddMessage(f"Processing Regions")
+ arcpy.AddMessage("Processing Regions")
# Start looping over the datasets array as we go region by region.
for table_name in gebco_dict:
gebco_file_name = gebco_dict[table_name]
@@ -618,31 +592,22 @@ def gebco_bathymetry(project_folder=""):
# Function parameter
del project_folder
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- except SystemExit:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- except Exception:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- except:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk:
- arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
- else:
- pass
- del rk
return True
- finally:
- pass
def main(project_folder=""):
try:
@@ -651,7 +616,7 @@ def main(project_folder=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -662,7 +627,7 @@ def main(project_folder=""):
if not arcpy.Exists(rf"{project_folder}\Scratch"):
os.makedirs(rf"{project_folder}\Scratch")
if not arcpy.Exists(rf"{project_folder}\Scratch\scratch.gdb"):
- arcpy.management.CreateFileGDB(rf"{project_folder}\Scratch", f"scratch")
+ arcpy.management.CreateFileGDB(rf"{project_folder}\Scratch", "scratch")
# Base Bathymetry Folder
if not os.path.isdir(rf"{project_folder}\Bathymetry"):
@@ -692,7 +657,7 @@ def main(project_folder=""):
else:
pass
- #test = True
+ #test = False
# Process base Hawaii bathymetry
if test:
result = create_hawaii_bathymetry(project_folder)
@@ -730,29 +695,29 @@ def main(project_folder=""):
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
return True
- finally:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk:
- arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
- else:
- pass
- del rk
if __name__ == '__main__':
try:
- arcgis_folder = rf"{os.path.expanduser('~')}\Documents\ArcGIS"
- sys.path.append(arcgis_folder)
-
project_folder = arcpy.GetParameterAsText(0)
if not project_folder:
- project_folder = rf"{arcgis_folder}\Projects\DisMAP\ArcGIS-Analysis-Python"
+ project_folder = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python")
else:
pass
@@ -761,13 +726,17 @@ def main(project_folder=""):
del result
# Declared Variables
- del project_folder, arcgis_folder
+ del project_folder
- except:
- arcpy.AddMessage(arcpy.GetMessages(0))
- traceback.print_exc()
- else:
- pass
- #print(f"Remaining Keys: ##--> '{', '.join([key for key in locals().keys() if not key.startswith('__')])}' <--##")
- finally:
- pass
\ No newline at end of file
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
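The `trace()` helper this diff introduces into each script parses the formatted traceback to recover the failing line number and the final error message. A minimal arcpy-free sketch of the same pattern (the script's filename handling is simplified away here):

```python
import sys
import traceback

def trace():
    """Parse the active exception's traceback, as the script's trace() does."""
    tb = sys.exc_info()[2]
    tbinfo = traceback.format_tb(tb)[0]   # e.g. '  File "x.py", line 12, in f\n    ...'
    line = tbinfo.split(", ")[1]          # the 'line N' fragment
    synerror = traceback.format_exc().splitlines()[-1]
    return line, synerror

try:
    1 / 0
except ZeroDivisionError:
    line, err = trace()
    print(line, err)
```

Splitting on `", "` relies on the fixed `File "...", line N, in name` layout of `traceback.format_tb`, so a source path containing `", "` would break the parse; for anything beyond message formatting, `traceback.extract_tb` gives the line number as a structured field instead.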
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_data_dictionary_json_files.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_data_dictionary_json_files.py
index c7c4b21..3ee0014 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_data_dictionary_json_files.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_data_dictionary_json_files.py
@@ -6,8 +6,10 @@
- Update derived parameter values using arcpy.SetParameter() or
arcpy.SetParameterAsText()
"""
-import arcpy, os
+import arcpy
+import os
import traceback
+import inspect
def script_tool(project_gdb=""):
"""Script code goes below"""
@@ -20,7 +22,7 @@ def script_tool(project_gdb=""):
# Define variables
project_folder = os.path.dirname(project_gdb)
scratch_folder = rf"{project_folder}\Scratch"
- scratch_gdb = rf"{scratch_folder}\scratch.gdb"
+ scratch_gdb = os.path.join(scratch_folder, "scratch.gdb")
# Set the workspace environment to local file geodatabase
arcpy.env.workspace = project_gdb
@@ -32,7 +34,25 @@ def script_tool(project_gdb=""):
arcpy.AddMessage(f"\n{'--Start' * 10}--\n")
arcpy.AddMessage(f"Creating Table and Field definitions for: {os.path.basename(project_gdb)}")
- field_definitions = {
+ field_definitions = { "Bio_Inc_Dec": {
+ "field_aliasName": "Bio_Inc_Dec",
+ "field_baseName": "Bio_Inc_Dec",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 30,
+ "field_name": "Bio_Inc_Dec",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Bio_Inc_Dec",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Bio_Inc_Dec"
+ }
+ },
"CSVFile": {
"field_aliasName": "CSV File",
"field_baseName": "CSVFile",
@@ -698,6 +718,25 @@ def script_tool(project_gdb=""):
"udom": "Haul Bin"
}
},
+ "Haul_Inc_Dec": {
+ "field_aliasName": "Haul_Inc_Dec",
+ "field_baseName": "Haul_Inc_Dec",
+ "field_defaultValue": "null",
+ "field_domain": "",
+ "field_editable": "true",
+ "field_isNullable": "true",
+ "field_length": 30,
+ "field_name": "Haul_Inc_Dec",
+ "field_precision": 0,
+ "field_required": "true",
+ "field_scale": 0,
+ "field_type": "String",
+ "field_attrdef": "Haul_Inc_Dec",
+ "field_attrdefs": "DisMAP Project GDB Data Dictionary",
+ "field_attrdomv": {
+ "udom": "Haul_Inc_Dec"
+ }
+ },
"HaulProportion": {
"field_aliasName": "Haul Proportion",
"field_baseName": "HaulProportion",
@@ -717,6 +756,10 @@ def script_tool(project_gdb=""):
"udom": "Haul Proportion"
}
},
+
+
+
+
"HighPS": {
"field_aliasName": "HighPS",
"field_baseName": "HighPS",
@@ -1850,7 +1893,8 @@ def script_tool(project_gdb=""):
"field_precision": 0,
"field_required": "true",
"field_scale": 0,
- "field_type": "String",
+ #"field_type": "String",
+ "field_type": "Integer",
"field_attrdef": "Year",
"field_attrdefs": "DisMAP Project GDB Data Dictionary",
"field_attrdomv": {
@@ -1961,9 +2005,10 @@ def script_tool(project_gdb=""):
"Depth"]
_Species_Filter = ["Species", "CommonName", "TaxonomicGroup", "FilterRegion",
"FilterSubRegion", "ManagementBody", "ManagementPlan", "DistributionProjectName"]
- _SpeciesPersistenceIndicatorTrend = ["Region", "SurveyName", "Species", "CommonName", "TrendCategory", "Notes"]
+ _SpeciesPersistenceIndicatorTrend = ["Region", "SurveyName", "Species", "CommonName", "TrendCategory", "Notes", "Haul_Inc_Dec", "Bio_Inc_Dec"]
_SpeciesPersistenceIndicatorPercentileBin = ["Region", "SurveyName", "Year", "Species", "CommonName", "PercentileBin", "WTCPUE", "HaulProportion", "HaulBin"]
+
#datasets_table = arcpy.ListTables("Datasets")[0]
#datasets_table_fields = [f.name for f in arcpy.ListFields(datasets_table) if f.type not in ["Geometry", "OID"] and f.name not in ["Shape_Area", "Shape_Length"]]
@@ -2171,8 +2216,6 @@ def script_tool(project_gdb=""):
# Function parameters
del project_gdb
- except KeyboardInterrupt:
- sys.exit()
except arcpy.ExecuteWarning:
arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
arcpy.AddWarning(arcpy.GetMessages(1))
@@ -2180,22 +2223,21 @@ def script_tool(project_gdb=""):
arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
- sys.exit()
+
except SystemExit as se:
arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
except Exception as e:
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
- sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
- sys.exit()
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -2205,7 +2247,7 @@ def script_tool(project_gdb=""):
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\February 1 2026\February 1 2026.gdb"
else:
pass
@@ -2216,7 +2258,7 @@ def script_tool(project_gdb=""):
except SystemExit:
pass
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
else:
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_indicators_table_director.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_indicators_table_director.py
index 4ad468e..4c816a6 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_indicators_table_director.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_indicators_table_director.py
@@ -9,12 +9,22 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
-import inspect
import arcpy # third-parties second
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ #filename = sys.path[0] + os.sep + f"{os.path.basename(__file__)}"
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
def director(project_gdb="", Sequential=True, table_names=[]):
try:
# Imports
@@ -50,14 +60,14 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Sequential Processing
if Sequential:
- arcpy.AddMessage(f"Sequential Processing")
+ arcpy.AddMessage("Sequential Processing")
for i in range(0, len(table_names)):
arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
try:
worker(region_gdb=region_gdb)
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
sys.exit()
@@ -68,11 +78,11 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Non-Sequential Processing
if not Sequential:
- arcpy.AddMessage(f"Non-Sequential Processing")
+ arcpy.AddMessage("Non-Sequential Processing")
# Imports
import multiprocessing
from time import time, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"Start multiprocessing using the ArcGIS Pro pythonw.exe.")
+ arcpy.AddMessage("Start multiprocessing using the ArcGIS Pro pythonw.exe.")
#Set multiprocessing exe in case we're running as an embedded process, i.e ArcGIS
#get_install_path() uses a registry query to figure out 64bit python exe if available
multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
@@ -83,17 +93,17 @@ def director(project_gdb="", Sequential=True, table_names=[]):
#Create a pool of workers, keep one cpu free for surfing the net.
#Let each worker process only handle 1 task before being restarted (in case of nasty memory leaks)
with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
- arcpy.AddMessage(f"\tPrepare arguments for processing")
+ arcpy.AddMessage("\tPrepare arguments for processing")
# Use apply_async so we can handle exceptions gracefully
jobs={}
for i in range(0, len(table_names)):
try:
arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
jobs[table_name] = pool.apply_async(worker, [region_gdb])
del table_name, region_gdb
- except:
+ except: # noqa: E722
pool.terminate()
traceback.print_exc()
sys.exit()
@@ -108,7 +118,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
end_time = time()
elapse_time = end_time - start_time
arcpy.AddMessage(f"\nStart Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
- arcpy.AddMessage(f"Have the workers finished?")
+ arcpy.AddMessage("Have the workers finished?")
finish_time = strftime('%a %b %d %I:%M %p', localtime())
time_elapsed = u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
@@ -121,7 +131,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
try:
# wait for and get the result from the task
result.get()
- except:
+ except: # noqa: E722
pool.terminate()
traceback.print_exc()
sys.exit()
@@ -139,11 +149,11 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del result_completed
del start_time
del all_finished
- arcpy.AddMessage(f"\tClose the process pool")
+ arcpy.AddMessage("\tClose the process pool")
# close the process pool
pool.close()
# wait for all tasks to complete and processes to close
- arcpy.AddMessage(f"\tWait for all tasks to complete and processes to close")
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
pool.join()
# Just in case
pool.terminate()
@@ -151,7 +161,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del jobs
del _processes
del time, multiprocessing, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"\tDone with multiprocessing Pool")
+ arcpy.AddMessage("\tDone with multiprocessing Pool")
# Post-Processing
arcpy.AddMessage("Post-Processing Begins")
@@ -165,7 +175,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del dirpath, dirnames, filenames
del walk
for dataset in datasets:
- datasets_short_path = f"{os.path.basename(os.path.dirname(os.path.dirname(dataset)))}\{os.path.basename(os.path.dirname(dataset))}\{os.path.basename(dataset)}"
+ datasets_short_path = f".. {'/'.join(dataset.split(os.sep)[-4:])}"
dataset_name = os.path.basename(dataset)
region_gdb = os.path.dirname(dataset)
arcpy.AddMessage(f"\tDataset: '{dataset_name}'")
@@ -190,36 +200,22 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Function Parameters
del project_gdb, Sequential, table_names
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
- traceback.print_exc()
- sys.exit()
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
def process_indicator_tables(project_gdb=""):
try:
@@ -235,8 +231,8 @@ def process_indicator_tables(project_gdb=""):
arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
project_folder = os.path.dirname(project_gdb)
- scratch_folder = rf"{project_folder}\Scratch"
- scratch_workspace = rf"{project_folder}\Scratch\scratch.gdb"
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(project_folder, "Scratch", "scratch.gdb")
csv_data_folder = rf"{project_folder}\CSV_Data"
arcpy.env.workspace = project_gdb
@@ -265,7 +261,7 @@ def process_indicator_tables(project_gdb=""):
in_table_path = rf"{project_gdb}\{in_table}"
del in_table
- arcpy.AddMessage(f"\tUpdating field values to replace None with empty string")
+ arcpy.AddMessage("\tUpdating field values to replace None with empty string")
fields = [f.name for f in arcpy.ListFields(in_table_path) if f.type == "String"]
#for field in fields:
@@ -325,36 +321,22 @@ def process_indicator_tables(project_gdb=""):
# Function Parameters
del project_gdb
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
- traceback.print_exc()
- sys.exit()
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
def script_tool(project_gdb=""):
try:
@@ -364,7 +346,7 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -386,10 +368,8 @@ def script_tool(project_gdb=""):
director(project_gdb=project_gdb, Sequential=True, table_names=["AI_IDW", "HI_IDW",])
elif not Test:
pass
- #director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GOA_IDW", "NBS_IDW",])
- #director(project_gdb=project_gdb, Sequential=False, table_names=["SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW",])
- #director(project_gdb=project_gdb, Sequential=False, table_names=["GMEX_IDW", "WC_ANN_IDW", "WC_TRI_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW",])
- #director(project_gdb=project_gdb, Sequential=False, table_names=["HI_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
else:
pass
del Test
@@ -402,7 +382,7 @@ def script_tool(project_gdb=""):
pass
del CombineIndicatorTables
- except:
+ except: # noqa: E722
traceback.print_exc()
sys.exit()
@@ -425,48 +405,46 @@ def script_tool(project_gdb=""):
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- if "Test" in locals().keys(): del Test
if __name__ == '__main__':
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026", "February 1 2026.gdb")
else:
pass
+
script_tool(project_gdb)
+
arcpy.SetParameterAsText(1, "Result")
+
del project_gdb
- except:
- traceback.print_exc()
- else:
- pass
- finally:
- pass
\ No newline at end of file
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
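The hunks above collapse the per-exception handlers into a shared `trace()` helper and return `False` instead of calling `sys.exit()`. A minimal standalone sketch of that pattern, with `print` standing in for `arcpy.AddError` so it runs without ArcGIS (the `run_step` function and its failing division are illustrative stand-ins, not code from the patch):

```python
import os
import sys
import traceback

def trace():
    """Report the line, file, and message of the active exception,
    mirroring the trace() helper added in the director module."""
    tb = sys.exc_info()[2]
    tbinfo = traceback.format_tb(tb)[0]   # '  File "x.py", line N, in fn'
    line = tbinfo.split(", ")[1]          # -> 'line N'
    filename = os.path.basename(globals().get("__file__", "<interactive>"))
    synerror = traceback.format_exc().splitlines()[-1]
    return line, filename, synerror

def run_step():
    try:
        1 / 0  # stand-in for a failing geoprocessing step
    except:  # noqa: E722
        line, filename, err = trace()
        print(f"Python error on {line} of {filename}")
        print(err)
        return False
    return True
```

Returning `False` rather than exiting lets the caller (here, the `__main__` block) decide how to stop, which is the behavioral change these handler rewrites make.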
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_indicators_table_worker.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_indicators_table_worker.py
index 35a90a1..fdd1c6e 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_indicators_table_worker.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_indicators_table_worker.py
@@ -9,7 +9,8 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
import inspect
@@ -88,14 +89,16 @@ def printRowContent(region_indicators):
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -187,7 +190,8 @@ def worker(region_gdb=""):
#arcpy.AddMessage(datecode)
#arcpy.AddMessage(dismap.date_code(datecode))
- arcpy.env.cellSize = cellsize; del cellsize
+ arcpy.env.cellSize = cellsize
+ del cellsize
# Region Raster Mask
datasetcode_raster_mask = os.path.join(region_gdb, f"{table_name}_Raster_Mask")
@@ -214,7 +218,7 @@ def worker(region_gdb=""):
input_rasters = {}
- arcpy.AddMessage(f"\tCreate a list of input biomass raster path locations")
+ arcpy.AddMessage("\tCreate a list of input biomass raster path locations")
#fields = "DatasetCode;Region;Season;Species;CommonName;SpeciesCommonName;CoreSpecies;Year;StdTime;Variable;Value;Dimensions"
#fields = fields.split(";")
@@ -258,7 +262,7 @@ def worker(region_gdb=""):
# Start with empty row_values list of list
row_values = []
- arcpy.AddMessage(f"Interate over the species names")
+ arcpy.AddMessage("Iterate over the species names")
for variable in sorted(input_rasters):
@@ -297,7 +301,7 @@ def worker(region_gdb=""):
first_year = year if year < first_year else first_year
#arcpy.AddMessage(f"\t{first_year}, {year} {first_year == year}")
- arcpy.AddMessage(f"\t> Calculating biomassArray")
+ arcpy.AddMessage("\t> Calculating biomassArray")
biomassArray = arcpy.RasterToNumPyArray(input_raster_path, nodata_to_value=np.nan)
biomassArray[biomassArray <= 0.0] = np.nan
@@ -311,7 +315,7 @@ def worker(region_gdb=""):
# ###--->>> Biomass End
- arcpy.AddMessage(f"\t> Calculating latitudeArray")
+ arcpy.AddMessage("\t> Calculating latitudeArray")
# ###--->>> Latitude Start
#CenterOfGravityLatitude = None
@@ -413,7 +417,7 @@ def worker(region_gdb=""):
# ###--->>> Latitude End
- arcpy.AddMessage(f"\t> Calculating longitudeArray")
+ arcpy.AddMessage("\t> Calculating longitudeArray")
# ###--->>> Longitude Start
#CenterOfGravityLongitude = None
@@ -517,7 +521,7 @@ def worker(region_gdb=""):
# ###--->>> Longitude End
- arcpy.AddMessage(f"\t> Calculating bathymetryArray")
+ arcpy.AddMessage("\t> Calculating bathymetryArray")
# ###--->>> Center of Gravity Depth (Bathymetry) Start
@@ -646,7 +650,7 @@ def worker(region_gdb=""):
arcpy.AddMessage('Something wrong with biomass raster')
- arcpy.AddMessage(f"\t> Assigning variables to row values")
+ arcpy.AddMessage("\t> Assigning variables to row values")
# Clean-up
del maximumBiomass
@@ -712,9 +716,12 @@ def worker(region_gdb=""):
del raster_years
del first_year
- if "first_year_offset_latitude" in locals(): del first_year_offset_latitude
- if "first_year_offset_longitude" in locals(): del first_year_offset_longitude
- if "first_year_offset_depth" in locals(): del first_year_offset_depth
+ if "first_year_offset_latitude" in locals():
+ del first_year_offset_latitude
+ if "first_year_offset_longitude" in locals():
+ del first_year_offset_longitude
+ if "first_year_offset_depth" in locals():
+ del first_year_offset_depth
del region_bathymetry, region_latitude, region_longitude, input_rasters
@@ -732,7 +739,7 @@ def worker(region_gdb=""):
try:
row = [None if x != x else x for x in row]
cursor.insertRow(row)
- except:
+ except: # noqa: E722
# Get the traceback object
tb = sys.exc_info()[2]
tbinfo = traceback.format_tb(tb)[0]
@@ -795,14 +802,16 @@ def worker(region_gdb=""):
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -825,7 +834,7 @@ def preprocessing(project_gdb="", table_names="", clear_folder=True):
# Set varaibales
project_folder = os.path.dirname(project_gdb)
scratch_folder = rf"{project_folder}\Scratch"
- scratch_workspace = rf"{project_folder}\Scratch\scratch.gdb"
+ scratch_workspace = os.path.join(project_folder, "Scratch", "scratch.gdb")
# Clear Scratch Folder
#ClearScratchFolder = True
@@ -842,7 +851,7 @@ def preprocessing(project_gdb="", table_names="", clear_folder=True):
del project_folder, scratch_workspace
if not table_names:
- table_names = [row[0] for row in arcpy.da.SearchCursor(f"{project_gdb}\Datasets",
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"),
"TableName",
where_clause = "TableName LIKE '%_IDW'")]
else:
@@ -851,15 +860,15 @@ def preprocessing(project_gdb="", table_names="", clear_folder=True):
for table_name in table_names:
arcpy.AddMessage(f"Pre-Processing: {table_name}")
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
- region_scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ region_scratch_workspace = os.path.join(scratch_folder, table_name, "scratch.gdb")
# Create Scratch Workspace for Region
if not arcpy.Exists(region_scratch_workspace):
- os.makedirs(rf"{scratch_folder}\{table_name}")
+ os.makedirs(os.path.join(scratch_folder, table_name))
if not arcpy.Exists(region_scratch_workspace):
arcpy.AddMessage(f"Create File GDB: '{table_name}'")
- arcpy.management.CreateFileGDB(rf"{scratch_folder}\{table_name}", f"scratch")
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, table_name), "scratch")
arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
del region_scratch_workspace
# # # CreateFileGDB
@@ -937,14 +946,16 @@ def preprocessing(project_gdb="", table_names="", clear_folder=True):
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -958,7 +969,7 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -974,7 +985,7 @@ def script_tool(project_gdb=""):
table_names = ["HI_IDW",]
- #preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
+ preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
for table_name in table_names:
region_gdb = rf"{os.path.dirname(project_gdb)}\Scratch\{table_name}.gdb"
@@ -1028,14 +1039,16 @@ def script_tool(project_gdb=""):
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -1044,13 +1057,13 @@ def script_tool(project_gdb=""):
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026", "February 1 2026.gdb")
else:
pass
script_tool(project_gdb)
arcpy.SetParameterAsText(1, "Result")
del project_gdb
- except:
+ except: # noqa: E722
traceback.print_exc()
else:
pass
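One idiom in the worker diff worth calling out: before `cursor.insertRow(row)`, each row passes through `[None if x != x else x for x in row]`. This relies on NaN being the only common value that is unequal to itself, so NaN cells become `None` (stored as null in the geodatabase) while every other value passes through unchanged. A standalone sketch, no arcpy required:

```python
import math

def nan_to_none(row):
    # NaN is the only value for which x != x is True, so this swaps
    # NaN entries for None and passes every other value through.
    return [None if x != x else x for x in row]

# Mixed row of floats, a NaN, and a string, as an insertRow payload might look
row = nan_to_none([41.9, float("nan"), "Gadus morhua", math.nan, 0.0])
```

This is why the worker can compute statistics with `np.nan` as the no-data sentinel and still insert rows that a geodatabase table accepts.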
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_metadata_json_files.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_metadata_json_files.py
index f595cb6..d7677c7 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_metadata_json_files.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_metadata_json_files.py
@@ -6,7 +6,10 @@
- Update derived parameter values using arcpy.SetParameter() or
arcpy.SetParameterAsText()
"""
-import os, sys, traceback, inspect
+import os
+import sys
+import traceback
+import inspect
import arcpy
@@ -19,7 +22,7 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -157,23 +160,8 @@ def script_tool(project_gdb=""):
"discKeys" : 7,
"keyword" : 0,
"thesaName" : 1,
- "resTitle" : 0,
- "date" : 1,
- "createDate" : 0,
- "pubDate" : 1,
- "reviseDate" : 2,
- "citOnlineRes" : 2,
- "linkage" : 0,
- "orFunct" : 1,
- "OnFunctCd" : 0,
- "thesaLang" : 2,
- "languageCode" : 0,
- "countryCode" : 1,
- "themeKeys" : 8,
- "keyword" : 0,
- "thesaName" : 1,
- "resTitle" : 0,
- "date" : 1,
+ "resTitle" : 0, # noqa: F601
+ "date" : 1, # noqa: F601
"createDate" : 0,
"pubDate" : 1,
"reviseDate" : 2,
@@ -184,55 +172,69 @@ def script_tool(project_gdb=""):
"thesaLang" : 2,
"languageCode" : 0,
"countryCode" : 1,
+ "themeKeys" : 8, # noqa: F601
+ "keyword" : 0, # noqa: F601
+ "thesaName" : 1, # noqa: F601
+ "resTitle" : 0, # noqa: F601
+ "date" : 1, # noqa: F601
+ "createDate" : 0, # noqa: F601
+ "pubDate" : 1, # noqa: F601
+ "reviseDate" : 2, # noqa: F601
+ "citOnlineRes" : 2, # noqa: F601
+ "linkage" : 0, # noqa: F601
+ "orFunct" : 1, # noqa: F601
+ "OnFunctCd" : 0, # noqa: F601
+ "thesaLang" : 2, # noqa: F601
+ "languageCode" : 0, # noqa: F601
+ "countryCode" : 1, # noqa: F601
"placeKeys" : 9,
- "keyword" : 0,
- "thesaName" : 1,
- "resTitle" : 0,
- "date" : 1,
- "createDate" : 0,
- "pubDate" : 1,
- "reviseDate" : 2,
- "citOnlineRes" : 2,
- "linkage" : 0,
- "orFunct" : 1,
- "OnFunctCd" : 0,
- "thesaLang" : 2,
- "languageCode" : 0,
- "countryCode" : 1,
- "tempKeys" : 10,
- "keyword" : 0,
- "thesaName" : 1,
- "resTitle" : 0,
- "date" : 1,
- "createDate" : 0,
- "pubDate" : 1,
- "reviseDate" : 2,
- "citOnlineRes" : 2,
- "linkage" : 0,
- "orFunct" : 1,
- "OnFunctCd" : 0,
- "thesaLang" : 2,
- "languageCode" : 0,
- "countryCode" : 1,
- "otherKeys" : 11,
- "keyword" : 0,
- "thesaName" : 1,
- "resTitle" : 0,
- "date" : 1,
- "createDate" : 0,
- "pubDate" : 1,
- "reviseDate" : 2,
- "citOnlineRes" : 2,
- "linkage" : 0,
- "orFunct" : 1,
- "OnFunctCd" : 0,
- "thesaLang" : 2,
- "languageCode" : 0,
- "countryCode" : 1,
- "idPoC" : 11,
- "resMaint" : 12,
-
- "tpCat" : 18,
+ "keyword" : 0, # noqa: F601
+ "thesaName" : 1, # noqa: F601
+ "resTitle" : 0, # noqa: F601
+ "date" : 1, # noqa: F601
+ "createDate" : 0, # noqa: F601
+ "pubDate" : 1, # noqa: F601
+ "reviseDate" : 2, # noqa: F601
+ "citOnlineRes" : 2, # noqa: F601
+ "linkage" : 0, # noqa: F601
+ "orFunct" : 1, # noqa: F601
+ "OnFunctCd" : 0, # noqa: F601
+ "thesaLang" : 2, # noqa: F601
+ "languageCode" : 0, # noqa: F601
+ "countryCode" : 1, # noqa: F601
+ "tempKeys" : 10, # noqa: F601
+ "keyword" : 0, # noqa: F601
+ "thesaName" : 1, # noqa: F601
+ "resTitle" : 0, # noqa: F601
+ "date" : 1, # noqa: F601
+ "createDate" : 0, # noqa: F601
+ "pubDate" : 1, # noqa: F601
+ "reviseDate" : 2, # noqa: F601
+ "citOnlineRes" : 2, # noqa: F601
+ "linkage" : 0, # noqa: F601
+ "orFunct" : 1, # noqa: F601
+ "OnFunctCd" : 0, # noqa: F601
+ "thesaLang" : 2, # noqa: F601
+ "languageCode" : 0, # noqa: F601
+ "countryCode" : 1, # noqa: F601
+ "otherKeys" : 11, # noqa: F601
+ "keyword" : 0, # noqa: F601
+ "thesaName" : 1, # noqa: F601
+ "resTitle" : 0, # noqa: F601
+ "date" : 1, # noqa: F601
+ "createDate" : 0, # noqa: F601
+ "pubDate" : 1, # noqa: F601
+ "reviseDate" : 2, # noqa: F601
+ "citOnlineRes" : 2, # noqa: F601
+ "linkage" : 0, # noqa: F601
+ "orFunct" : 1, # noqa: F601
+ "OnFunctCd" : 0, # noqa: F601
+ "thesaLang" : 2, # noqa: F601
+ "languageCode" : 0, # noqa: F601
+ "countryCode" : 1, # noqa: F601
+ "idPoC" : 11, # noqa: F601
+ "resMaint" : 12, # noqa: F601
+ "tpCat" : 18, # noqa: F601
}
import json
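A note on the `# noqa: F601` suppressions above: pyflakes' repeated-key warning is flagging real runtime behavior, since a Python dict literal keeps only the last binding for each repeated key, so the earlier `"keyword"`, `"thesaName"`, etc. entries in these metadata-index dictionaries are silently discarded when the literal is evaluated. A minimal demonstration (the keys below echo the dict above but the values are arbitrary):

```python
# A dict literal with a repeated key keeps only the last value;
# the earlier "keyword" binding is discarded without any runtime error.
d = {
    "keyword": 0,   # noqa: F601 -- shadowed by the later binding
    "thesaName": 1,
    "keyword": 7,   # noqa: F601 -- this binding wins
}
```

If the duplicates are intentional (e.g. the dict documents element order at several nesting levels), the suppression is fine; if each binding was meant to survive, the structure needs nesting or distinct keys.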
@@ -348,30 +350,30 @@ def script_tool(project_gdb=""):
"adminArea" : 2, "postCode" : 3, "eMailAdd" : 4,
"country" : 5, "cntPhone" : 1, "voiceNum" : 0,
"faxNum" : 1, "cntHours" : 2, "cntOnlineRes" : 3,
- "linkage" : 0, "protocol" : 1, "orName" : 2,
- "orDesc" : 3, "orFunct" : 4, "OnFunctCd" : 0,
- "editorSave" : 6, "displayName" : 7, "role" : 8,
+ "linkage" : 0, "protocol" : 1, "orName" : 2, # noqa: F601
+ "orDesc" : 3, "orFunct" : 4, "OnFunctCd" : 0, # noqa: F601
+ "editorSave" : 6, "displayName" : 7, "role" : 8, # noqa: F601
"RoleCd" : 0,
"srcMedName" : 7,
"MedNameCd" : 0,
"prcStep" : 3,
"stepDesc" : 0,
"stepProc" : 1,
- "editorSource" : 0, "editorDigest" : 1,"rpIndName" : 2,
- "rpOrgName" : 3, "rpPosName" : 4, "rpCntInfo" : 5,
- "cntAddress" : 0, "delPoint" : 0, "city" : 1,
- "adminArea" : 2, "postCode" : 3, "eMailAdd" : 4,
- "country" : 5, "cntPhone" : 1, "voiceNum" : 0,
- "faxNum" : 1, "cntHours" : 2, "cntOnlineRes" : 3,
- "linkage" : 0, "protocol" : 1, "orName" : 2,
- "orDesc" : 3, "orFunct" : 4, "OnFunctCd" : 0,
- "editorSave" : 6, "displayName" : 7, "role" : 8,
- "RoleCd" : 0,
+ "editorSource" : 0, "editorDigest" : 1, "rpIndName" : 2, # noqa: F601
+ "rpOrgName" : 3, "rpPosName" : 4, "rpCntInfo" : 5, # noqa: F601
+ "cntAddress" : 0, "delPoint" : 0, "city" : 1, # noqa: F601
+ "adminArea" : 2, "postCode" : 3, "eMailAdd" : 4, # noqa: F601
+ "country" : 5, "cntPhone" : 1, "voiceNum" : 0, # noqa: F601
+ "faxNum" : 1, "cntHours" : 2, "cntOnlineRes" : 3, # noqa: F601
+ "linkage" : 0, "protocol" : 1, "orName" : 2, # noqa: F601
+ "orDesc" : 3, "orFunct" : 4, "OnFunctCd" : 0, # noqa: F601
+ "editorSave" : 6, "displayName" : 7, "role" : 8, # noqa: F601
+ "RoleCd" : 0, # noqa: F601
"stepDateTm" : 2,
- "cntOnlineRes" : 3, "linkage" : 0,
- "protocol" : 1, "orName" : 2, "orDesc" : 3,
- "orFunct" : 4, "OnFunctCd" : 0,
+ "cntOnlineRes" : 3, "linkage" : 0, # noqa: F601
+ "protocol" : 1, "orName" : 2, "orDesc" : 3, # noqa: F601
+ "orFunct" : 4, "OnFunctCd" : 0, # noqa: F601
}
import json
@@ -428,12 +430,12 @@ def script_tool(project_gdb=""):
"unitsODist" : 0,
"transSize" : 1,
"onLineSrc" : 2,
- "linkage" : 0,
+ "linkage" : 0, # noqa: F601
"protocol" : 1,
- "orName" : 2,
- "orDesc" : 3,
- "orFunct" : 4,
- "OnFunctCd" : 0,
+ "orName" : 2, # noqa: F601
+ "orDesc" : 3, # noqa: F601
+ "orFunct" : 4, # noqa: F601
+ "OnFunctCd" : 0, # noqa: F601
}
import json
@@ -576,14 +578,16 @@ def script_tool(project_gdb=""):
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -593,7 +597,7 @@ def script_tool(project_gdb=""):
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\February 1 2026\February 1 2026.gdb"
else:
pass
@@ -603,7 +607,7 @@ def script_tool(project_gdb=""):
except SystemExit:
pass
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
else:
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_mosaics_director.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_mosaics_director.py
index 6dc1cfb..38542fe 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_mosaics_director.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_mosaics_director.py
@@ -9,17 +9,137 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
-import inspect
import arcpy # third-parties second
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ #filename = sys.path[0] + os.sep + f"{os.path.basename(__file__)}"
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
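The `trace()` added above is the stock Esri error-reporting helper. A self-contained sketch of how it is consumed in an `except` block (the filename derivation is simplified to `sys.argv[0]` here so the snippet runs anywhere; the script's version uses `__file__`):

```python
import os
import sys
import traceback

def trace():
    """Return (line, filename, error message) for the exception being handled."""
    tb = sys.exc_info()[2]
    tbinfo = traceback.format_tb(tb)[0]
    line = tbinfo.split(", ")[1]  # e.g. "line 17"
    filename = os.path.basename(sys.argv[0]) or "<interactive>"
    synerror = traceback.format_exc().splitlines()[-1]
    return line, filename, synerror

try:
    1 / 0
except ZeroDivisionError:
    line, filename, err = trace()
    print(f"Python error on {line} of {filename}: {err}")
```

Note that `trace()` only works while an exception is being handled; called outside an `except` block, `sys.exc_info()` returns `(None, None, None)` and `format_tb` fails.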
+
+def preprocessing(project_gdb="", table_names="", clear_folder=True):
+ try:
+ import dismap_tools
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Esri\ArcGISPro\ArcToolbox\History (%AppData% already resolves to ...\AppData\Roaming)
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ # Clear Scratch Folder
+ #ClearScratchFolder = True
+ #if ClearScratchFolder:
+ if clear_folder:
+ dismap_tools.clear_folder(folder=os.path.join(os.path.dirname(project_gdb), "Scratch"))
+ else:
+ pass
+ #del ClearScratchFolder
+ del clear_folder
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ del project_folder, scratch_workspace
+
+ if not table_names:
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"),
+ "TableName",
+ where_clause = "TableName LIKE '%_IDW'")]
+ else:
+ pass
+
+ for table_name in table_names:
+ arcpy.AddMessage(f"Pre-Processing: {table_name}")
+
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ region_scratch_workspace = os.path.join(scratch_folder, f"{table_name}", "scratch.gdb")
+
+ # Create Scratch Workspace for Region
+ if not arcpy.Exists(region_scratch_workspace):
+ os.makedirs(os.path.join(scratch_folder, table_name), exist_ok=True)
+ arcpy.AddMessage(f"Create File GDB: '{table_name}'")
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, table_name), "scratch")
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del region_scratch_workspace
+ # # # CreateFileGDB
+ arcpy.AddMessage(f"Creating File GDB: '{table_name}'")
+ arcpy.management.CreateFileGDB(scratch_folder, table_name)
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # CreateFileGDB
+ # # # Datasets
+ # Process: Make Table View (Make Table View) (management)
+ datasets = rf'{project_gdb}\Datasets'
+ arcpy.AddMessage(f"'{os.path.basename(datasets)}' has {arcpy.management.GetCount(datasets)[0]} records")
+ arcpy.management.Copy(datasets, rf"{region_gdb}\Datasets")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # Datasets
+
+ # # # LayerSpeciesYearImageName
+ LayerSpeciesYearImageName = rf"{project_gdb}\{table_name}_LayerSpeciesYearImageName"
+ arcpy.AddMessage(f"The table '{table_name}_LayerSpeciesYearImageName' has {arcpy.management.GetCount(LayerSpeciesYearImageName)[0]} records")
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_LayerSpeciesYearImageName", rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del LayerSpeciesYearImageName
+ # # # LayerSpeciesYearImageName
+
+ # # # Raster_Mask
+ arcpy.AddMessage(f"Copy Raster Mask for '{table_name}'")
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_Raster_Mask", rf"{region_gdb}\{table_name}_Raster_Mask")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # Raster_Mask
+
+ del datasets
+ # Declared Variables
+ del table_name
+
+ # Declared Variables
+ del scratch_folder, region_gdb
+ # Imports
+ del dismap_tools
+ # Function Parameters
+ del project_gdb, table_names
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
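`preprocessing()` discovers the region tables with a SQL `LIKE '%_IDW'` filter on the `TableName` field. The same predicate can be exercised against any SQL backend; a sketch using sqlite3 (table and values illustrative, not from the project geodatabase):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Datasets (TableName TEXT)")
conn.executemany(
    "INSERT INTO Datasets VALUES (?)",
    [("AI_IDW",), ("EBS_IDW",), ("Species",)],
)
# Note: in SQL LIKE, '_' is a single-character wildcard, so '%_IDW'
# matches any name with at least one character before "IDW".
rows = [r[0] for r in conn.execute(
    "SELECT TableName FROM Datasets WHERE TableName LIKE '%_IDW'")]
print(rows)  # → ['AI_IDW', 'EBS_IDW']
```

The `_` wildcard behaves the same way in an arcpy `where_clause`; if a literal underscore must be matched, an ESCAPE clause is the usual fix.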
def director(project_gdb="", Sequential=True, table_names=[]):
try:
# Imports
import dismap_tools
- from create_mosaics_worker import preprocessing, worker
+ from create_mosaics_worker import worker
# Test if passed workspace exists, if not sys.exit()
if not arcpy.Exists(rf"{project_gdb}"):
@@ -51,11 +171,11 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Sequential Processing
if Sequential:
- arcpy.AddMessage(f"Sequential Processing")
+ arcpy.AddMessage("Sequential Processing")
for i in range(0, len(table_names)):
arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
try:
worker(region_gdb=region_gdb)
except SystemExit:
@@ -69,11 +189,11 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Non-Sequential Processing
if not Sequential:
- arcpy.AddMessage(f"Non-Sequential Processing")
+ arcpy.AddMessage("Non-Sequential Processing")
# Imports
import multiprocessing
from time import time, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"Start multiprocessing using the ArcGIS Pro pythonw.exe.")
+ arcpy.AddMessage("Start multiprocessing using the ArcGIS Pro pythonw.exe.")
#Set multiprocessing exe in case we're running as an embedded process, i.e ArcGIS
#get_install_path() uses a registry query to figure out 64bit python exe if available
multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
@@ -84,17 +204,17 @@ def director(project_gdb="", Sequential=True, table_names=[]):
#Create a pool of workers, keep one cpu free for surfing the net.
#Let each worker process only handle 1 task before being restarted (in case of nasty memory leaks)
with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
- arcpy.AddMessage(f"\tPrepare arguments for processing")
+ arcpy.AddMessage("\tPrepare arguments for processing")
# Use apply_async so we can handle exceptions gracefully
jobs={}
for i in range(0, len(table_names)):
try:
arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
jobs[table_name] = pool.apply_async(worker, [region_gdb])
del table_name, region_gdb
- except:
+ except: # noqa: E722
pool.terminate()
traceback.print_exc()
sys.exit()
@@ -109,7 +229,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
end_time = time()
elapse_time = end_time - start_time
arcpy.AddMessage(f"\nStart Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
- arcpy.AddMessage(f"Have the workers finished?")
+ arcpy.AddMessage("Have the workers finished?")
finish_time = strftime('%a %b %d %I:%M %p', localtime())
time_elapsed = u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
@@ -122,7 +242,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
try:
# wait for and get the result from the task
result.get()
- except:
+ except: # noqa: E722
pool.terminate()
traceback.print_exc()
sys.exit()
@@ -140,11 +260,11 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del result_completed
del start_time
del all_finished
- arcpy.AddMessage(f"Close the process pool")
+ arcpy.AddMessage("Close the process pool")
# close the process pool
pool.close()
# wait for all tasks to complete and processes to close
- arcpy.AddMessage(f"\tWait for all tasks to complete and processes to close")
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
pool.join()
# Just in case
pool.terminate()
@@ -152,7 +272,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del jobs
del _processes
del time, multiprocessing, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"Done with multiprocessing Pool\n")
+ arcpy.AddMessage("Done with multiprocessing Pool\n")
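The pool pattern used above (queue every region with `apply_async` into a `jobs` dict, then call `result.get()` so any worker exception is re-raised in the director) can be sketched without arcpy. A `ThreadPool` is used here only so the sketch runs anywhere; the script itself uses a process `Pool` with `maxtasksperchild=1` and `set_executable` pointed at ArcGIS Pro's `pythonw.exe`:

```python
from multiprocessing.pool import ThreadPool

def worker(region_gdb):
    # stand-in for the real mosaic worker
    return f"processed {region_gdb}"

table_names = ["AI_IDW", "EBS_IDW", "HI_IDW"]
results = {}
with ThreadPool(processes=2) as pool:
    # queue every region, keyed by table name, like the jobs dict above
    jobs = {t: pool.apply_async(worker, [f"Scratch/{t}.gdb"]) for t in table_names}
    for name, job in jobs.items():
        results[name] = job.get()  # re-raises any exception the worker hit
    pool.close()
    pool.join()
print(results["AI_IDW"])  # → processed Scratch/AI_IDW.gdb
```

`get()` is what surfaces worker failures; without it, a crashed region would fail silently inside the pool.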
# Post-Processing
arcpy.AddMessage("Post-Processing Begins")
@@ -168,7 +288,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del dirpath, dirnames, filenames
del walk
for dataset in datasets:
- datasets_short_path = f"{os.path.basename(os.path.dirname(os.path.dirname(dataset)))}\{os.path.basename(os.path.dirname(dataset))}\{os.path.basename(dataset)}"
+ datasets_short_path = f".. {'/'.join(dataset.split(os.sep)[-4:])}"
dataset_name = os.path.basename(dataset)
dataset_type = arcpy.Describe(dataset).datatype
region_gdb = os.path.dirname(dataset)
@@ -231,40 +351,26 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Declared Variables assigned in function
del scratch_folder, csv_data_folder, crf_folder
# Imports
- del preprocessing, worker, dismap_tools
+ del worker, dismap_tools
# Function Parameters
del project_gdb, Sequential, table_names
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
- traceback.print_exc()
- sys.exit()
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
def script_tool(project_gdb=""):
try:
@@ -274,39 +380,36 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
arcpy.AddMessage(f"{'-' * 80}\n")
- # Clear Scratch Folder
- ClearScratchFolder = False
- if ClearScratchFolder:
- import dismap_tools
- dismap_tools.clear_folder(folder=scratch_folder)
- del dismap_tools
- else:
- pass
- del ClearScratchFolder
+## # Clear Scratch Folder
+## ClearScratchFolder = False
+## if ClearScratchFolder:
+## import dismap_tools
+## dismap_tools.clear_folder(folder=scratch_folder)
+## del dismap_tools
+## else:
+## pass
+## del ClearScratchFolder
try:
# "AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW",
# "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",
- Test = True
+ Test = False
if Test:
- director(project_gdb=project_gdb, Sequential=True, table_names=["SEUS_FAL_IDW"])
+ director(project_gdb=project_gdb, Sequential=True, table_names=["SEUS_FAL_IDW", "HI_IDW", "NBS_IDW",])
elif not Test:
- #director(project_gdb=project_gdb, Sequential=False, table_names=["NBS_IDW", "ENBS_IDW", "HI_IDW"])
- #director(project_gdb=project_gdb, Sequential=True, table_names=["SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW",])
- #director(project_gdb=project_gdb, Sequential=False, table_names=["WC_TRI_IDW", "AI_IDW", "GMEX_IDW"])
- #director(project_gdb=project_gdb, Sequential=False, table_names=["GOA_IDW", "WC_ANN_IDW", "NEUS_FAL_IDW",])
- director(project_gdb=project_gdb, Sequential=True, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW"])
+ director(project_gdb=project_gdb, Sequential=True, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=True, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
else:
pass
del Test
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
sys.exit()
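The path-shortening expression introduced in this hunk (`f".. {'/'.join(__file__.split(os.sep)[-4:])}"`) keeps only the trailing components of a path for log messages. A small standalone version (path components illustrative):

```python
import os

def short_path(path, depth=4):
    # keep only the trailing components, joined with forward slashes
    return ".. " + "/".join(path.split(os.sep)[-depth:])

p = os.sep.join(["C:", "Users", "me", "Documents", "proj", "src", "tool.py"])
print(short_path(p))  # → '.. Documents/proj/src/tool.py'
```

This replaces the earlier hard-coded `..\Documents\ArcGIS\Projects\..` prefix, so the message stays correct wherever the project folder actually lives.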
@@ -332,51 +435,43 @@ def script_tool(project_gdb=""):
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except KeyboardInterrupt:
- arcpy.AddError(f"Caught an KeyboardInterrupt in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}.")
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}.")
- arcpy.AddWarning(arcpy.GetMessages(1))
- traceback.print_exc()
- sys.exit()
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: '{se}' in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: '{e}' in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- if "Test" in locals().keys(): del Test
if __name__ == '__main__':
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026", "February 1 2026.gdb")
else:
pass
script_tool(project_gdb)
arcpy.SetParameterAsText(1, "Result")
del project_gdb
- except:
- traceback.print_exc()
- else:
- pass
- finally:
- pass
\ No newline at end of file
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_mosaics_worker.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_mosaics_worker.py
index 59da677..12fc26c 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_mosaics_worker.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_mosaics_worker.py
@@ -9,21 +9,24 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
-import inspect
import arcpy # third-parties second
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ #filename = sys.path[0] + os.sep + f"{os.path.basename(__file__)}"
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
def worker(region_gdb=""):
try:
- # Test if passed workspace exists, if not sys.exit()
- if not arcpy.Exists(rf"{region_gdb}"):
- arcpy.AddError(f"{os.path.basename(region_gdb)} is missing!!")
- arcpy.AddError(f"Function: '{inspect.stack()[0][3]}', Line Number: {inspect.stack()[0][2]}")
- sys.exit()
- else:
- pass
# Set History and Metadata logs, set serverity and message level
arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
@@ -84,12 +87,12 @@ def worker(region_gdb=""):
#arcpy.env.outputCoordinateSystem = psr
#del geographic_area_sr, geographic_area
- arcpy.AddMessage(f"\tSet the 'outputCoordinateSystem' based on the projection information for the geographic region")
+ arcpy.AddMessage("\tSet the 'outputCoordinateSystem' based on the projection information for the geographic region")
psr = arcpy.Describe(region_raster_mask).spatialReference
arcpy.env.outputCoordinateSystem = psr
del region_raster_mask
- arcpy.AddMessage(f"Building the 'input_raster_paths' list")
+ arcpy.AddMessage("Building the 'input_raster_paths' list")
layerspeciesyearimagename = rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName"
@@ -172,7 +175,7 @@ def worker(region_gdb=""):
try:
arcpy.management.RemoveIndex(mosaic_path, [f"{table_name}_MosaicSpeciesIndex",])
- except:
+ except: # noqa: E722
pass
arcpy.AddMessage(f"Adding field index to {os.path.basename(mosaic_path)}")
@@ -306,171 +309,34 @@ def worker(region_gdb=""):
# Function parameter
del region_gdb
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
- traceback.print_exc()
- sys.exit()
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
-
-def preprocessing(project_gdb="", table_names="", clear_folder=True):
- try:
- import dismap_tools
-
- arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
- arcpy.SetLogMetadata(True)
- arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
- # 1—If a tool produces a warning or an error, it will throw an exception.
- # 2—If a tool produces an error, it will throw an exception. This is the default.
- arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
-
- # Set basic arcpy.env variables
- arcpy.env.overwriteOutput = True
- arcpy.env.parallelProcessingFactor = "100%"
-
- # Set varaibales
- project_folder = os.path.dirname(project_gdb)
- scratch_folder = rf"{project_folder}\Scratch"
- scratch_workspace = rf"{project_folder}\Scratch\scratch.gdb"
-
- # Clear Scratch Folder
- #ClearScratchFolder = True
- #if ClearScratchFolder:
- if clear_folder:
- dismap_tools.clear_folder(folder=rf"{os.path.dirname(project_gdb)}\Scratch")
- else:
- pass
- #del ClearScratchFolder
- del clear_folder
-
- arcpy.env.workspace = project_gdb
- arcpy.env.scratchWorkspace = scratch_workspace
- del project_folder, scratch_workspace
-
- if not table_names:
- table_names = [row[0] for row in arcpy.da.SearchCursor(f"{project_gdb}\Datasets",
- "TableName",
- where_clause = "TableName LIKE '%_IDW'")]
- else:
- pass
-
- for table_name in table_names:
- arcpy.AddMessage(f"Pre-Processing: {table_name}")
-
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
- region_scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
-
- # Create Scratch Workspace for Region
- if not arcpy.Exists(region_scratch_workspace):
- os.makedirs(rf"{scratch_folder}\{table_name}")
- if not arcpy.Exists(region_scratch_workspace):
- arcpy.AddMessage(f"Create File GDB: '{table_name}'")
- arcpy.management.CreateFileGDB(rf"{scratch_folder}\{table_name}", f"scratch")
- arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- del region_scratch_workspace
- # # # CreateFileGDB
- arcpy.AddMessage(f"Creating File GDB: '{table_name}'")
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
- arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- # # # CreateFileGDB
- # # # Datasets
- # Process: Make Table View (Make Table View) (management)
- datasets = rf'{project_gdb}\Datasets'
- arcpy.AddMessage(f"'{os.path.basename(datasets)}' has {arcpy.management.GetCount(datasets)[0]} records")
- arcpy.management.Copy(datasets, rf"{region_gdb}\Datasets")
- arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- # # # Datasets
-
- # # # LayerSpeciesYearImageName
- LayerSpeciesYearImageName = rf"{project_gdb}\{table_name}_LayerSpeciesYearImageName"
- arcpy.AddMessage(f"The table '{table_name}_LayerSpeciesYearImageName' has {arcpy.management.GetCount(LayerSpeciesYearImageName)[0]} records")
- arcpy.management.Copy(rf"{project_gdb}\{table_name}_LayerSpeciesYearImageName", rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName")
- arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- del LayerSpeciesYearImageName
- # # # LayerSpeciesYearImageName
-
- # # # Raster_Mask
- arcpy.AddMessage(f"Copy Raster Mask for '{table_name}'")
- arcpy.management.Copy(rf"{project_gdb}\{table_name}_Raster_Mask", rf"{region_gdb}\{table_name}_Raster_Mask")
- arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- # # # Raster_Mask
-
- del datasets
- # Declared Variables
- del table_name
-
- # Declared Variables
- del scratch_folder, region_gdb
- # Imports
- del dismap_tools
- # Function Parameters
- del project_gdb, table_names
-
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
- traceback.print_exc()
- sys.exit()
- except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
- return True
- finally:
- pass
def script_tool(project_gdb=""):
try:
# Imports
import dismap_tools
+ from create_mosaics_director import preprocessing
from time import gmtime, localtime, strftime, time
# Set a start time so that we can see how log things take
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
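The timing banner above pairs with the elapsed-time line these scripts emit at the end of a run, formatted with `strftime("%H:%M:%S", gmtime(elapsed))`. A runnable sketch of just that formatting (the `sleep` stands in for the real geoprocessing work):

```python
from time import gmtime, sleep, strftime, time

start_time = time()
sleep(0.2)  # stand-in for the real geoprocessing work
elapse_time = time() - start_time
# same formatting as the scripts' elapsed-time banner
banner = "Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
print(banner)
```

One caveat: `gmtime` wraps after 24 hours, so the `%H:%M:%S` rendering is only honest for runs shorter than a day, which is fine for these tools.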
@@ -519,48 +385,46 @@ def script_tool(project_gdb=""):
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
if __name__ == '__main__':
try:
project_gdb = arcpy.GetParameterAsText(0)
+
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026", "February 1 2026.gdb")
else:
pass
script_tool(project_gdb)
+
arcpy.SetParameterAsText(1, "Result")
+
del project_gdb
- except:
- traceback.print_exc()
- else:
- pass
- finally:
- pass
\ No newline at end of file
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_rasters_director.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_rasters_director.py
index c201cd1..358c9a7 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_rasters_director.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_rasters_director.py
@@ -9,16 +9,155 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
-import inspect
-
import arcpy # third-parties second
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
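+# Illustrative usage (assumption: trace() is only meaningful inside an except
+# block, because it reads sys.exc_info()):
+#   try:
+#       arcpy.management.Delete("no_such_dataset")
+#   except arcpy.ExecuteError:
+#       line, filename, err = trace()
+#       arcpy.AddError(f"Error on {line} of {filename}: {err}")
+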
+def preprocessing(project_gdb="", table_names="", clear_folder=True):
+ try:
+ import dismap_tools
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ # Clear Scratch Folder
+ #ClearScratchFolder = True
+ #if ClearScratchFolder:
+ if clear_folder:
+ dismap_tools.clear_folder(folder=scratch_folder)
+ else:
+ pass
+ #del ClearScratchFolder
+ del clear_folder
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ del project_folder, scratch_workspace
+
+ if not table_names:
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"),
+ "TableName",
+ where_clause = "TableName LIKE '%_IDW'")]
+ else:
+ pass
+
+ for table_name in table_names:
+ arcpy.AddMessage(f"Pre-Processing: {table_name}")
+
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ region_scratch_workspace = os.path.join(scratch_folder, f"{table_name}", "scratch.gdb")
+
+ # Create Scratch Workspace for Region
+ if not arcpy.Exists(region_scratch_workspace):
+ os.makedirs(os.path.join(scratch_folder, table_name))
+ if not arcpy.Exists(region_scratch_workspace):
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, f"{table_name}"), "scratch")
+ del region_scratch_workspace
+
+ sample_locations = rf"{table_name}_Sample_Locations"
+
+ arcpy.AddMessage(f"Creating File GDB: {table_name}")
+ arcpy.management.CreateFileGDB(scratch_folder, table_name)
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+
+ # Process: Make Table View (Make Table View) (management)
+ datasets = rf'{project_gdb}\Datasets'
+ arcpy.AddMessage(f"\t{os.path.basename(datasets)} has {arcpy.management.GetCount(datasets)[0]} records")
+
+ table_name_view = "Dataset Table View"
+ arcpy.management.MakeTableView(in_table = datasets,
+ out_view = table_name_view,
+ where_clause = f"TableName = '{table_name}'"
+ )
+ arcpy.AddMessage(f"\tThe table {table_name_view} has {arcpy.management.GetCount(table_name_view)[0]} records")
+ arcpy.management.CopyRows(table_name_view, rf"{region_gdb}\Datasets")
+ arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ filter_region = [row[0] for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", "FilterRegion")][0].replace("'", "''")
+ filter_subregion = [row[0] for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", "FilterSubRegion")][0].replace("'", "''")
+
+ arcpy.management.Delete(table_name_view)
+ del table_name_view
+
+ arcpy.AddMessage(f"Copying: The table {table_name}_LayerSpeciesYearImageName")
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_LayerSpeciesYearImageName", rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ #
+ arcpy.AddMessage("Make Feature Layer (Make Feature Layer) (management)")
+ # Process: Make Feature Layer (Make Feature Layer) (management)
+ idw_lyr = arcpy.management.MakeFeatureLayer( in_features = rf"{project_gdb}\{sample_locations}",
+ out_layer = "IDW_Sample_Locations_Layer",
+ where_clause = "DistributionProjectName = 'NMFS/Rutgers IDW Interpolation'",
+ workspace = "",
+ field_info = "OBJECTID OBJECTID VISIBLE NONE;Shape Shape VISIBLE NONE;DatasetCode DatasetCode VISIBLE NONE;Region Region VISIBLE NONE;Season Season VISIBLE NONE;DistributionProjectName DistributionProjectName VISIBLE NONE;SummaryProduct SummaryProduct VISIBLE NONE;SampleID SampleID VISIBLE NONE;Year Year VISIBLE NONE;StdTime StdTime VISIBLE NONE;Species Species VISIBLE NONE;WTCPUE WTCPUE VISIBLE NONE;MapValue MapValue VISIBLE NONE;TransformUnit TransformUnit VISIBLE NONE;CommonName CommonName VISIBLE NONE;SpeciesCommonName SpeciesCommonName VISIBLE NONE;CommonNameSpecies CommonNameSpecies VISIBLE NONE;CoreSpecies CoreSpecies VISIBLE NONE;Stratum Stratum VISIBLE NONE;StratumArea StratumArea VISIBLE NONE;Latitude Latitude VISIBLE NONE;Longitude Longitude VISIBLE NONE;Depth Depth VISIBLE NONE"
+ )
+
+ arcpy.AddMessage("Copy Features (Copy Features) (management)")
+ arcpy.management.CopyFeatures(idw_lyr, rf"{region_gdb}\{sample_locations}")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ #
+ arcpy.AddMessage("Copy")
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_Raster_Mask", rf"{region_gdb}\{table_name}_Raster_Mask")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.management.Delete(idw_lyr)
+ del idw_lyr
+
+ del sample_locations
+ del region_gdb, table_name
+ del datasets, filter_region, filter_subregion
+ # Declared Variables
+ del scratch_folder
+ # Imports
+ del dismap_tools
+ # Function Parameters
+ del project_gdb, table_names
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
def director(project_gdb="", Sequential=True, table_names=[]):
try:
- from create_rasters_worker import preprocessing, worker
+ from create_rasters_worker import worker
# Test if passed workspace exists, if not sys.exit()
if not arcpy.Exists(rf"{project_gdb}"):
@@ -39,11 +178,15 @@ def director(project_gdb="", Sequential=True, table_names=[]):
arcpy.env.overwriteOutput = True
arcpy.env.parallelProcessingFactor = "100%"
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ del project_folder
+
preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
# Sequential Processing
if Sequential:
- arcpy.AddMessage(f"Sequential Processing")
+ arcpy.AddMessage("Sequential Processing")
for i in range(0, len(table_names)):
arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
@@ -51,7 +194,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
try:
pass
worker(region_gdb=region_gdb)
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
sys.exit()
@@ -64,7 +207,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
if not Sequential:
import multiprocessing
from time import time, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"Start multiprocessing using the ArcGIS Pro pythonw.exe.")
+ arcpy.AddMessage("Start multiprocessing using the ArcGIS Pro pythonw.exe.")
#Set multiprocessing exe in case we're running as an embedded process, i.e. ArcGIS
#get_install_path() uses a registry query to figure out 64bit python exe if available
multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
@@ -75,17 +218,17 @@ def director(project_gdb="", Sequential=True, table_names=[]):
#Create a pool of workers, keep one cpu free for surfing the net.
#Let each worker process only handle 1 task before being restarted (in case of nasty memory leaks)
with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
- arcpy.AddMessage(f"\tPrepare arguments for processing")
+ arcpy.AddMessage("\tPrepare arguments for processing")
# Use apply_async so we can handle exceptions gracefully
jobs={}
for i in range(0, len(table_names)):
try:
- arcpy.AddMessage(f"Processing: {tablenames[i]}")
+ arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
jobs[table_name] = pool.apply_async(worker, [region_gdb])
del table_name, region_gdb
- except:
+ except: # noqa: E722
pool.terminate()
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
@@ -101,7 +244,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
end_time = time()
elapse_time = end_time - start_time
arcpy.AddMessage(f"\nStart Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
- arcpy.AddMessage(f"Have the workers finished?")
+ arcpy.AddMessage("Have the workers finished?")
finish_time = strftime('%a %b %d %I:%M %p', localtime())
time_elapsed = u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
@@ -114,7 +257,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
try:
# wait for and get the result from the task
result.get()
- except:
+ except: # noqa: E722
pool.terminate()
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
@@ -133,11 +276,11 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del result_completed
del start_time
del all_finished
- arcpy.AddMessage(f"\tClose the process pool")
+ arcpy.AddMessage("\tClose the process pool")
# close the process pool
pool.close()
# wait for all tasks to complete and processes to close
- arcpy.AddMessage(f"\tWait for all tasks to complete and processes to close")
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
pool.join()
# Just in case
pool.terminate()
@@ -145,47 +288,39 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del jobs
del _processes
del time, multiprocessing, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"\tDone with multiprocessing Pool")
+ arcpy.AddMessage("\tDone with multiprocessing Pool")
# No Post-Processing
arcpy.AddMessage(f"Compacting the {os.path.basename(project_gdb)} GDB")
arcpy.management.Compact(project_gdb)
arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
+
# Declared Variables assigned in function
del scratch_folder
+
# Imports
- del preprocessing, worker
+ del worker
+
# Function Parameters
del project_gdb, Sequential, table_names
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
+
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
def script_tool(project_gdb=""):
try:
@@ -195,7 +330,7 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -208,21 +343,14 @@ def script_tool(project_gdb=""):
Test = False
if Test:
director(project_gdb=project_gdb, Sequential=True, table_names=["HI_IDW", "AI_IDW",])
-
elif not Test:
- #
- #director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", ])
- #director(project_gdb=project_gdb, Sequential=False, table_names=["EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "NBS_IDW", ])
- #director(project_gdb=project_gdb, Sequential=False, table_names=["HI_IDW",])
- #director(project_gdb=project_gdb, Sequential=False, table_names=[ "WC_ANN_IDW", "WC_TRI_IDW",])
- #director(project_gdb=project_gdb, Sequential=False, table_names=["SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW",])
- director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", ])
- #director(project_gdb=project_gdb, Sequential=False, table_names=[])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
else:
pass
del Test
- except:
+ except: # noqa: E722
pass
#arcpy.AddError(arcpy.GetMessages(2))
#traceback.print_exc()
@@ -248,46 +376,43 @@ def script_tool(project_gdb=""):
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
if __name__ == '__main__':
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026", "February 1 2026.gdb")
else:
pass
script_tool(project_gdb)
arcpy.SetParameterAsText(1, "Result")
del project_gdb
- except:
- traceback.print_exc()
- else:
- pass
- finally:
- pass
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_rasters_worker.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_rasters_worker.py
index 6aabe61..698d66e 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_rasters_worker.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_rasters_worker.py
@@ -9,13 +9,22 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
-import inspect
-
import arcpy # third-parties second
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ #filename = sys.path[0] + os.sep + f"{os.path.basename(__file__)}"
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
def print_table(table=""):
try:
""" Print first 5 rows of a table """
@@ -30,10 +39,22 @@ def print_table(table=""):
del row
del desc, fields, oid
del table
- except:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
def worker(region_gdb=""):
try:
@@ -90,9 +111,9 @@ def worker(region_gdb=""):
# Assigning variables from items in the chosen table list
# ['AI_IDW', 'AI_IDW_Region', 'AI', 'Aleutian Islands', None, 'IDW']
table_name = region_list[0]
- geographic_area = region_list[1]
+ #geographic_area = region_list[1]
datasetcode = region_list[2]
- cell_size = region_list[3] if type(region_list[3]) != "str" else int(region_list[3])
+ cell_size = int(region_list[3]) if isinstance(region_list[3], str) else region_list[3]
region = region_list[4]
season = region_list[5]
distri_code = region_list[6]
@@ -188,18 +209,22 @@ def worker(region_gdb=""):
msg = msg + f"\t\t\tSpecies: {species}\n"
msg = msg + f"\t\t\tYear: {year}\n"
msg = msg + f"\t\t\tOutput Raster: {os.path.basename(output_raster_path)}\n"
- arcpy.AddMessage(msg); del msg
+ arcpy.AddMessage(msg)
+ del msg
- arcpy.AddMessage(f'\t\t\tSelect Layer by Attribute: "CLEAR_SELECTION"')
+ arcpy.AddMessage('\t\t\tSelect Layer by Attribute: "CLEAR_SELECTION"')
arcpy.management.SelectLayerByAttribute( sample_locations_path_layer, "CLEAR_SELECTION" )
arcpy.AddMessage(f"\t\t\tSelect Layer by Attribute: Species = '{species}' AND Year = {year}")
# Select for species and year
- arcpy.management.SelectLayerByAttribute( sample_locations_path_layer,
- "NEW_SELECTION",
- f"Species = '{species}' AND Year = {year}"
+ #print(sample_locations_path_layer)
+ #print(f"Species = '{species}' AND Year = {year}")
+ arcpy.management.SelectLayerByAttribute( in_layer_or_view = sample_locations_path_layer,
+ selection_type = "NEW_SELECTION",
+ where_clause = f"Species = '{species}' And Year = {year}",
+ invert_where_clause=None
)
# Get the count of records for selected species
@@ -211,7 +236,7 @@ def worker(region_gdb=""):
#if summary_product == "Yes":
- arcpy.AddMessage(f"\t\t\tProcessing IDW")
+ arcpy.AddMessage("\t\t\tProcessing IDW")
# Select weighted years
arcpy.management.SelectLayerByAttribute( sample_locations_path_layer,
@@ -336,188 +361,34 @@ def worker(region_gdb=""):
# Function parameter
del region_gdb
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
- except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
- return True
- finally:
- pass
-
-def preprocessing(project_gdb="", table_names="", clear_folder=True):
- try:
- import dismap_tools
-
- arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
- arcpy.SetLogMetadata(True)
- arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
- # 1—If a tool produces a warning or an error, it will throw an exception.
- # 2—If a tool produces an error, it will throw an exception. This is the default.
- arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
-
- # Set basic arcpy.env variables
- arcpy.env.overwriteOutput = True
- arcpy.env.parallelProcessingFactor = "100%"
-
- # Set varaibales
- project_folder = os.path.dirname(project_gdb)
- scratch_folder = rf"{project_folder}\Scratch"
- scratch_workspace = rf"{project_folder}\Scratch\scratch.gdb"
-
- # Clear Scratch Folder
- #ClearScratchFolder = True
- #if ClearScratchFolder:
- if clear_folder:
- dismap_tools.clear_folder(folder=scratch_folder)
- else:
- pass
- #del ClearScratchFolder
- del clear_folder
-
- arcpy.env.workspace = project_gdb
- arcpy.env.scratchWorkspace = scratch_workspace
- del project_folder, scratch_workspace
-
- if not table_names:
- table_names = [row[0] for row in arcpy.da.SearchCursor(f"{project_gdb}\Datasets",
- "TableName",
- where_clause = "TableName LIKE '%_IDW'")]
- else:
- pass
-
- for table_name in table_names:
- arcpy.AddMessage(f"Pre-Processing: {table_name}")
-
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
- region_scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
-
- # Create Scratch Workspace for Region
- if not arcpy.Exists(region_scratch_workspace):
- os.makedirs(rf"{scratch_folder}\{table_name}")
- if not arcpy.Exists(region_scratch_workspace):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}\{table_name}", f"scratch")
- del region_scratch_workspace
-
- sample_locations = rf"{table_name}_Sample_Locations"
-
- arcpy.AddMessage(f"Creating File GDB: {table_name}")
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
- arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-
-
- # Process: Make Table View (Make Table View) (management)
- datasets = rf'{project_gdb}\Datasets'
- arcpy.AddMessage(f"\t{os.path.basename(datasets)} has {arcpy.management.GetCount(datasets)[0]} records")
-
- table_name_view = "Dataset Table View"
- arcpy.management.MakeTableView(in_table = datasets,
- out_view = table_name_view,
- where_clause = f"TableName = '{table_name}'"
- )
- arcpy.AddMessage(f"\tThe table {table_name_view} has {arcpy.management.GetCount(table_name_view)[0]} records")
- arcpy.management.CopyRows(table_name_view, rf"{region_gdb}\Datasets")
- arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-
- filter_region = [row[0] for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", "FilterRegion")][0].replace("'", "''")
- filter_subregion = [row[0] for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", "FilterSubRegion")][0].replace("'", "''")
-
- arcpy.management.Delete(table_name_view)
- del table_name_view
-
- arcpy.AddMessage(f"Copying: The table {table_name}_LayerSpeciesYearImageName")
- arcpy.management.Copy(rf"{project_gdb}\{table_name}_LayerSpeciesYearImageName", rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName")
- arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- #
- arcpy.AddMessage(f"Make Feature Layer (Make Feature Layer) (management)")
- # Process: Make Feature Layer (Make Feature Layer) (management)
- idw_lyr = arcpy.management.MakeFeatureLayer( in_features = rf"{project_gdb}\{sample_locations}",
- out_layer = "IDW_Sample_Locations_Layer",
- where_clause = "DistributionProjectName = 'NMFS/Rutgers IDW Interpolation'",
- workspace = "",
- field_info = "OBJECTID OBJECTID VISIBLE NONE;Shape Shape VISIBLE NONE;DatasetCode DatasetCode VISIBLE NONE;Region Region VISIBLE NONE;Season Season VISIBLE NONE;DistributionProjectName DistributionProjectName VISIBLE NONE;SummaryProduct SummaryProduct VISIBLE NONE;SampleID SampleID VISIBLE NONE;Year Year VISIBLE NONE;StdTime StdTime VISIBLE NONE;Species Species VISIBLE NONE;WTCPUE WTCPUE VISIBLE NONE;MapValue MapValue VISIBLE NONE;TransformUnit TransformUnit VISIBLE NONE;CommonName CommonName VISIBLE NONE;SpeciesCommonName SpeciesCommonName VISIBLE NONE;CommonNameSpecies CommonNameSpecies VISIBLE NONE;CoreSpecies CoreSpecies VISIBLE NONE;Stratum Stratum VISIBLE NONE;StratumArea StratumArea VISIBLE NONE;Latitude Latitude VISIBLE NONE;Longitude Longitude VISIBLE NONE;Depth Depth VISIBLE NONE"
- )
-
- arcpy.AddMessage(f"Copy Features (Copy Features) (management)")
- arcpy.management.CopyFeatures(idw_lyr, rf"{region_gdb}\{sample_locations}")
- arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- #
- arcpy.AddMessage(f"Copy")
- arcpy.management.Copy(rf"{project_gdb}\{table_name}_Raster_Mask", rf"{region_gdb}\{table_name}_Raster_Mask")
- arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-
- arcpy.management.Delete(idw_lyr)
- del idw_lyr
-
- del sample_locations
- del region_gdb, table_name
- del datasets, filter_region, filter_subregion
- # Declared Variables
- del scratch_folder
- # Imports
- del dismap_tools
- # Function Parameters
- del project_gdb, table_names
-
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
def script_tool(project_gdb=""):
try:
# Imports
import dismap_tools
+ from create_rasters_director import preprocessing
from time import gmtime, localtime, strftime, time
# Set a start time so that we can see how log things take
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -558,7 +429,7 @@ def script_tool(project_gdb=""):
# Declared Variables
# Imports
- del dismap_tools
+ del dismap_tools, preprocessing
# Function Parameters
del project_gdb
# Elapsed time
@@ -576,48 +447,43 @@ def script_tool(project_gdb=""):
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ # Return geoprocessing tool-specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ # Get non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
if __name__ == '__main__':
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026", "February 1 2026.gdb")
else:
pass
script_tool(project_gdb)
arcpy.SetParameterAsText(1, "Result")
del project_gdb
- except:
- traceback.print_exc()
- else:
- pass
- finally:
- pass
\ No newline at end of file
+
+ except arcpy.ExecuteError:
+ # Return geoprocessing tool-specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ # Get non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_region_bathymetry_director.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_region_bathymetry_director.py
index 0449423..0b5519d 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_region_bathymetry_director.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_region_bathymetry_director.py
@@ -9,20 +9,153 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys # built-ins first
import traceback
-import inspect
import arcpy # third-parties second
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
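The `trace()` helper above pulls the failing line and error message out of the active traceback so they can be reported through `arcpy.AddError`. A minimal standalone sketch of the same idea, runnable without arcpy (reporting the module's own file name, an assumption for illustration):

```python
import sys
import traceback

def trace():
    """Return (line, filename, error message) for the active exception."""
    tb = sys.exc_info()[2]
    tbinfo = traceback.format_tb(tb)[0]            # first frame of the traceback
    line = tbinfo.split(", ")[1]                   # e.g. "line 14"
    filename = globals().get("__file__", "<interactive>")
    synerror = traceback.format_exc().splitlines()[-1]
    return line, filename, synerror

try:
    1 / 0
except Exception:
    line, filename, err = trace()
    print(f"Python error on {line} of {filename}: {err}")
```

Because `format_tb` frames look like `File "...", line N, in <module>`, splitting on `", "` and taking index 1 yields the `line N` fragment used in the error messages.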
+def preprocessing(project_gdb="", table_names="", clear_folder=True):
+ try:
+ import dismap_tools
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0 - A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1 - If a tool produces a warning or an error, it will throw an exception.
+ # 2 - If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(project_folder, "Scratch", "scratch.gdb")
+ csv_data_folder = os.path.join(project_folder, "CSV_Data")
+ base_project_bathymetry_gdb = os.path.join(os.path.dirname(project_folder), "Bathymetry", "Bathymetry.gdb")
+
+## # Clear Scratch Folder
+## #ClearScratchFolder = True
+## #if ClearScratchFolder:
+## if clear_folder:
+## dismap_tools.clear_folder(folder = scratch_folder)
+## else:
+## pass
+## #del ClearScratchFolder
+## del clear_folder
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ del project_folder, scratch_workspace
+
+ if not table_names:
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"), "TableName", where_clause = "TableName LIKE '%_IDW'")]
+ else:
+ pass
+
+ for table_name in table_names:
+ arcpy.AddMessage(f"Pre-Processing: {table_name}")
+
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ region_scratch_workspace = os.path.join(scratch_folder, table_name, "scratch.gdb")
+
+ # Create Scratch Workspace for Region
+ if not arcpy.Exists(region_scratch_workspace):
+ os.makedirs(os.path.join(scratch_folder, table_name), exist_ok=True)
+ if not arcpy.Exists(region_scratch_workspace):
+ arcpy.AddMessage(f"Create File GDB: '{table_name}'")
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, table_name), "scratch")
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del region_scratch_workspace
+ # # # CreateFileGDB
+ arcpy.AddMessage(f"Creating File GDB: '{table_name}'")
+ arcpy.management.CreateFileGDB(scratch_folder, table_name)
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # CreateFileGDB
+
+ # # # Datasets
+ # Process: Make Table View (Make Table View) (management)
+ datasets = rf'{project_gdb}\Datasets'
+ arcpy.AddMessage(f"'{os.path.basename(datasets)}' has {arcpy.management.GetCount(datasets)[0]} records")
+ arcpy.management.Copy(datasets, os.path.join(region_gdb, "Datasets"))
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del datasets
+ # # # Datasets
+
+ # # # Fishnet
+ region_fishnet = os.path.join(project_gdb, f"{table_name}_Fishnet")
+ arcpy.AddMessage(f"The table '{table_name}_Fishnet' has {arcpy.management.GetCount(region_fishnet)[0]} records")
+ arcpy.management.Copy(region_fishnet, os.path.join(region_gdb, f"{table_name}_Fishnet"))
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del region_fishnet
+ # # # Fishnet
+
+ # # # Raster_Mask
+ region_raster_mask = os.path.join(project_gdb, f"{table_name}_Raster_Mask")
+ arcpy.AddMessage(f"Copy Raster Mask for '{table_name}'")
+ #arcpy.management.Copy(os.path.join(project_gdb, f"{table_name}_Raster_Mask"), os.path.join(region_gdb, f"{table_name}_Raster_Mask"))
+ arcpy.management.CopyRaster(region_raster_mask, os.path.join(region_gdb, f"{table_name}_Raster_Mask"))
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del region_raster_mask
+ # # # Raster_Mask
+
+ # # # Bathymetry
+ base_fishnet_bathymetry = os.path.join(base_project_bathymetry_gdb, f"{table_name}_Bathymetry")
+ arcpy.AddMessage(f"Copy Bathymetry for '{table_name}'")
+ #arcpy.management.Copy(os.path.join(project_bathymetry_gdb, f"{table_name}_Bathymetry"), os.path.join(region_gdb, f"{table_name}_Fishnet_Bathymetry"))
+ arcpy.management.CopyRaster(base_fishnet_bathymetry, os.path.join(region_gdb, f"{table_name}_Fishnet_Bathymetry"))
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del base_fishnet_bathymetry
+ # # # Bathymetry
+
+ # Declared Variables
+ del table_name
+
+ # Declared Variables
+ del scratch_folder, region_gdb
+ del csv_data_folder, base_project_bathymetry_gdb
+ # Imports
+ del dismap_tools
+ # Function Parameters
+ del project_gdb, table_names
+
+ except arcpy.ExecuteError:
+ # Return geoprocessing tool-specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ # Get non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
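The `preprocessing()` function above stages a copy of each region's inputs into its own file GDB so that parallel workers never contend for one workspace. The directory-then-workspace creation step can be sketched without arcpy (paths and names here are illustrative, not the real DisMAP layout):

```python
import os
import tempfile

def ensure_region_scratch(scratch_folder, table_name):
    """Create <scratch>/<table>/ and return the per-region scratch workspace path."""
    region_dir = os.path.join(scratch_folder, table_name)
    os.makedirs(region_dir, exist_ok=True)   # idempotent: safe if the folder already exists
    return os.path.join(region_dir, "scratch.gdb")

with tempfile.TemporaryDirectory() as scratch:
    first = ensure_region_scratch(scratch, "HI_IDW")
    second = ensure_region_scratch(scratch, "HI_IDW")   # repeat call is a no-op
```

Using `exist_ok=True` avoids the `FileExistsError` a bare `os.makedirs` raises when the region folder survives from a previous run while its scratch GDB does not.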
def director(project_gdb="", Sequential=True, table_names=[]):
try:
# Imports
import dismap_tools
- from create_region_bathymetry_worker import preprocessing, worker
+ from create_region_bathymetry_worker import worker
# Test if passed workspace exists, if not sys.exit()
- if not arcpy.Exists(rf"{project_gdb}"):
+ if not arcpy.Exists(project_gdb):
arcpy.AddError(f"{os.path.basename(project_gdb)} is missing!!")
arcpy.AddError(arcpy.GetMessages(2))
sys.exit()
@@ -52,68 +185,17 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del project_folder, scratch_workspace
-## if not table_names:
-## table_names = [row[0] for row in arcpy.da.SearchCursor(f"{project_gdb}\Datasets",
-## "TableName",
-## where_clause = "TableName LIKE '%_IDW'")]
-## else:
-## pass
-##
-## # Pre Processing
-## for table_name in table_names:
-## arcpy.AddMessage(f"Pre-Processing: {table_name}")
-##
-## region_gdb = rf"{scratch_folder}\{table_name}.gdb"
-## region_scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
-##
-## # Create Scratch Workspace for Region
-## if not arcpy.Exists(region_scratch_workspace):
-## os.makedirs(rf"{scratch_folder}\{table_name}")
-## if not arcpy.Exists(region_scratch_workspace):
-## arcpy.management.CreateFileGDB(rf"{scratch_folder}\{table_name}", f"scratch")
-## del region_scratch_workspace
-##
-## #datasets = [rf"{project_gdb}\{table_name}_Fishnet", ]
-## #if not any(arcpy.management.GetCount(d)[0] == 0 for d in datasets):
-##
-## if not arcpy.Exists(rf"{scratch_folder}\{table_name}.gdb"):
-## arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
-## arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-## else:
-## pass
-## arcpy.management.Copy(rf"{project_gdb}\{table_name}_Fishnet", rf"{region_gdb}\{table_name}_Fishnet")
-## arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-##
-## arcpy.management.CopyRaster(rf"{project_gdb}\{table_name}_Raster_Mask", rf"{region_gdb}\{table_name}_Raster_Mask")
-## arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-##
-## arcpy.management.CopyRaster(rf"{project_bathymetry_gdb}\{table_name}_Bathymetry", rf"{region_gdb}\{table_name}_Fishnet_Bathymetry")
-## arcpy.AddMessage("\tCopy Raster: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-##
-## #else:
-## # arcpy.AddWarning(f"One or more datasets contains zero records!!")
-## # for d in datasets:
-## # arcpy.AddMessage(f"\t{os.path.basename(d)} has {arcpy.management.GetCount(d)[0]} records")
-## # del d
-## # se = f"SystemExit at line number: '{traceback.extract_stack()[-1].lineno}'"
-## # sys.exit()(se)
-## #if "datasets" in locals().keys(): del datasets
-##
-## del region_gdb, table_name
-##
-## del project_bathymetry_gdb
-
# Sequential Processing
if Sequential:
- arcpy.AddMessage(f"Sequential Processing")
+ arcpy.AddMessage("Sequential Processing")
for i in range(0, len(table_names)):
arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
try:
pass
- #worker(region_gdb=region_gdb)
- except:
+ worker(region_gdb=region_gdb)
+ except: # noqa: E722
traceback.print_exc()
del region_gdb, table_name
del i
@@ -124,7 +206,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
if not Sequential:
import multiprocessing
from time import time, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"Start multiprocessing using the ArcGIS Pro pythonw.exe.")
+ arcpy.AddMessage("Start multiprocessing using the ArcGIS Pro pythonw.exe")
#Set multiprocessing exe in case we're running as an embedded process, i.e ArcGIS
#get_install_path() uses a registry query to figure out 64bit python exe if available
multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
@@ -135,17 +217,17 @@ def director(project_gdb="", Sequential=True, table_names=[]):
#Create a pool of workers, keep one cpu free for surfing the net.
#Let each worker process only handle 1 task before being restarted (in case of nasty memory leaks)
with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
- arcpy.AddMessage(f"\tPrepare arguments for processing")
+ arcpy.AddMessage("\tPrepare arguments for processing")
# Use apply_async so we can handle exceptions gracefully
jobs={}
for i in range(0, len(table_names)):
try:
arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
jobs[table_name] = pool.apply_async(worker, [region_gdb])
del table_name, region_gdb
- except:
+ except: # noqa: E722
pool.terminate()
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
@@ -161,7 +243,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
end_time = time()
elapse_time = end_time - start_time
arcpy.AddMessage(f"\nStart Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
- arcpy.AddMessage(f"Have the workers finished?")
+ arcpy.AddMessage("Have the workers finished?")
finish_time = strftime('%a %b %d %I:%M %p', localtime())
time_elapsed = u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
@@ -179,7 +261,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
pool.terminate()
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
@@ -198,11 +280,11 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del result_completed
del start_time
del all_finished
- arcpy.AddMessage(f"\tClose the process pool")
+ arcpy.AddMessage("\tClose the process pool")
# close the process pool
pool.close()
# wait for all tasks to complete and processes to close
- arcpy.AddMessage(f"\tWait for all tasks to complete and processes to close")
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
pool.join()
# Just in case
pool.terminate()
@@ -211,7 +293,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del _processes
del time, multiprocessing, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"\tDone with multiprocessing Pool")
+ arcpy.AddMessage("\tDone with multiprocessing Pool")
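The multiprocessing section above queues one `apply_async` task per region into a `jobs` dict keyed by table name, then collects each result. The same pattern can be sketched portably with a thread pool (a stand-in for `multiprocessing.Pool`, which requires an importable worker; the worker body here is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def worker(region_gdb):
    # stand-in for the real per-region worker
    return region_gdb.endswith(".gdb")

table_names = ["AI_IDW", "HI_IDW", "NBS_IDW"]
with ThreadPoolExecutor(max_workers=2) as pool:
    # one future per region, keyed by table name (mirrors jobs[table_name])
    jobs = {t: pool.submit(worker, f"{t}.gdb") for t in table_names}
    results = {t: f.result() for t, f in jobs.items()}

print(results)  # {'AI_IDW': True, 'HI_IDW': True, 'NBS_IDW': True}
```

Keying the futures by table name is what lets the director report per-region success or failure instead of a single pass/fail for the whole batch; `maxtasksperchild=1` in the original additionally restarts each process between tasks to contain memory leaks.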
# Post-Processing
arcpy.AddMessage("Post-Processing Begins")
@@ -228,7 +310,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del walk
for dataset in datasets:
- dataset_short_path = f"{os.path.basename(os.path.dirname(os.path.dirname(dataset)))}\{os.path.basename(os.path.dirname(dataset))}\{os.path.basename(dataset)}"
+ dataset_short_path = f"..{'/'.join(dataset.split(os.sep)[-3:])}"
#arcpy.AddMessage(fc_short_path)
dataset_name = os.path.basename(dataset)
region_gdb = os.path.dirname(dataset)
@@ -236,12 +318,12 @@ def director(project_gdb="", Sequential=True, table_names=[]):
arcpy.AddMessage(f"\t\tPath: '{dataset_short_path}'")
arcpy.AddMessage(f"\t\tRegion GDB: '{os.path.basename(region_gdb)}'")
- if arcpy.Exists(rf"{project_gdb}\{dataset_name}"):
- arcpy.management.Delete(rf"{project_gdb}\{dataset_name}")
- else:
- pass
+## if arcpy.Exists(rf"{project_gdb}\{dataset_name}"):
+## arcpy.management.Delete(rf"{project_gdb}\{dataset_name}")
+## else:
+## pass
- arcpy.management.Copy(dataset, rf"{project_gdb}\{dataset_name}")
+ arcpy.management.CopyRaster(dataset, rf"{project_gdb}\{dataset_name}")
arcpy.AddMessage("\tCopy: {0} {1}\n".format(f"{dataset_name}", arcpy.GetMessages(0).replace("\n", '\n\t')))
desc = arcpy.da.Describe(dataset)
@@ -258,41 +340,28 @@ def director(project_gdb="", Sequential=True, table_names=[]):
arcpy.AddMessage("\t"+arcpy.GetMessages(0).replace("\n", "\n\t"))
# Declared Variables
- del csv_data_folder, preprocessing, scratch_folder
+ del csv_data_folder, scratch_folder
# Imports
del dismap_tools, worker
# Function Parameters
del project_gdb, Sequential, table_names
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
- traceback.print_exc()
- sys.exit()
+
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ # Return geoprocessing tool-specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ # Get non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
def script_tool(project_gdb=""):
try:
@@ -302,22 +371,12 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: ..{'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
arcpy.AddMessage(f"{'-' * 80}\n")
- # Clear Scratch Folder
- ClearScratchFolder = False
- if ClearScratchFolder:
- import dismap_tools
- dismap_tools.clear_folder(folder=scratch_folder)
- del dismap_tools
- else:
- pass
- del ClearScratchFolder
-
try:
# "AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW",
# "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",
@@ -326,12 +385,11 @@ def script_tool(project_gdb=""):
if Test:
director(project_gdb=project_gdb, Sequential=True, table_names=["HI_IDW"])
else:
- director(project_gdb=project_gdb, Sequential=False, table_names=["NBS_IDW", "ENBS_IDW", "HI_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW",])
- director(project_gdb=project_gdb, Sequential=False, table_names=["WC_TRI_IDW", "GMEX_IDW", "AI_IDW", "GOA_IDW", "WC_ANN_IDW", "NEUS_FAL_IDW",])
- director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_SPR_IDW", "EBS_IDW"])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
del Test
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
sys.exit()
@@ -356,50 +414,37 @@ def script_tool(project_gdb=""):
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
- traceback.print_exc()
- sys.exit()
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ # Return geoprocessing tool-specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ # Get non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- if "Test" in locals().keys(): del Test
if __name__ == '__main__':
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026", "February 1 2026.gdb")
else:
pass
script_tool(project_gdb)
arcpy.SetParameterAsText(1, "Result")
del project_gdb
- except:
- traceback.print_exc()
- else:
- pass
- finally:
- pass
\ No newline at end of file
+
+ except: # noqa: E722
+ # Get non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_region_bathymetry_worker.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_region_bathymetry_worker.py
index 941b51c..a7b5ffc 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_region_bathymetry_worker.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_region_bathymetry_worker.py
@@ -9,12 +9,21 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
-import inspect
import arcpy # third-parties second
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
def worker(region_gdb=""):
try:
# Test if passed workspace exists, if not sys.exit()
@@ -81,11 +90,11 @@ def worker(region_gdb=""):
arcpy.AddMessage(f"Processing: {table_name}")
# Input
- region_fishnet = rf"{region_gdb}\{table_name}_Fishnet"
- region_raster_mask = rf"{region_gdb}\{table_name}_Raster_Mask"
- region_fishnet_bathymetry = rf"{region_gdb}\{table_name}_Fishnet_Bathymetry"
+ region_fishnet = os.path.join(region_gdb, f"{table_name}_Fishnet")
+ region_raster_mask = os.path.join(region_gdb, f"{table_name}_Raster_Mask")
+ region_fishnet_bathymetry = os.path.join(region_gdb, f"{table_name}_Fishnet_Bathymetry")
# Output
- region_bathymetry = rf"{region_gdb}\{table_name}_Bathymetry"
+ region_bathymetry = os.path.join(region_gdb, f"{table_name}_Bathymetry")
# Get the reference system defined for the region in datasets
# Set the output coordinate system to what is needed for the
@@ -141,7 +150,7 @@ def worker(region_gdb=""):
del region_bathymetry
- arcpy.management.Delete(rf"{region_gdb}\Datasets")
+ arcpy.management.Delete(os.path.join(region_gdb, "Datasets"))
arcpy.AddMessage("\tDelete: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
arcpy.management.Delete(region_raster_mask)
arcpy.AddMessage("\tDelete: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
@@ -161,170 +170,22 @@ def worker(region_gdb=""):
# Function parameter
del region_gdb
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
- traceback.print_exc()
- sys.exit()
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ # Return geoprocessing tool-specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ # Get non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
-
-def preprocessing(project_gdb="", table_names="", clear_folder=True):
- try:
- import dismap_tools
-
- arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
- arcpy.SetLogMetadata(True)
- arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
- # 1—If a tool produces a warning or an error, it will throw an exception.
- # 2—If a tool produces an error, it will throw an exception. This is the default.
- arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
-
- # Set basic arcpy.env variables
- arcpy.env.overwriteOutput = True
- arcpy.env.parallelProcessingFactor = "100%"
-
- # Set varaibales
- project_folder = os.path.dirname(project_gdb)
- scratch_folder = rf"{project_folder}\Scratch"
- scratch_workspace = rf"{project_folder}\Scratch\scratch.gdb"
- csv_data_folder = rf"{os.path.dirname(project_gdb)}\CSV_Data"
- project_bathymetry_gdb = rf"{project_folder}\Bathymetry\Bathymetry.gdb"
-
- # Clear Scratch Folder
- #ClearScratchFolder = True
- #if ClearScratchFolder:
- if clear_folder:
- dismap_tools.clear_folder(folder=rf"{os.path.dirname(project_gdb)}\Scratch")
- else:
- pass
- #del ClearScratchFolder
- del clear_folder
-
- arcpy.env.workspace = project_gdb
- arcpy.env.scratchWorkspace = scratch_workspace
- del project_folder, scratch_workspace
-
- if not table_names:
- table_names = [row[0] for row in arcpy.da.SearchCursor(f"{project_gdb}\Datasets",
- "TableName",
- where_clause = "TableName LIKE '%_IDW'")]
- else:
- pass
-
- for table_name in table_names:
- arcpy.AddMessage(f"Pre-Processing: {table_name}")
-
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
- region_scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
-
- # Create Scratch Workspace for Region
- if not arcpy.Exists(region_scratch_workspace):
- os.makedirs(rf"{scratch_folder}\{table_name}")
- if not arcpy.Exists(region_scratch_workspace):
- arcpy.AddMessage(f"Create File GDB: '{table_name}'")
- arcpy.management.CreateFileGDB(rf"{scratch_folder}\{table_name}", f"scratch")
- arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- del region_scratch_workspace
- # # # CreateFileGDB
- arcpy.AddMessage(f"Creating File GDB: '{table_name}'")
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
- arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- # # # CreateFileGDB
-
- # # # Datasets
- # Process: Make Table View (Make Table View) (management)
- datasets = rf'{project_gdb}\Datasets'
- arcpy.AddMessage(f"'{os.path.basename(datasets)}' has {arcpy.management.GetCount(datasets)[0]} records")
- arcpy.management.Copy(datasets, rf"{region_gdb}\Datasets")
- arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- del datasets
- # # # Datasets
-
- # # # Fishnet
- region_fishnet = rf"{project_gdb}\{table_name}_Fishnet"
- arcpy.AddMessage(f"The table '{table_name}_Fishnet' has {arcpy.management.GetCount(region_fishnet)[0]} records")
- arcpy.management.Copy(region_fishnet, rf"{region_gdb}\{table_name}_Fishnet")
- arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- del region_fishnet
- # # # Fishnet
-
- # # # Raster_Mask
- arcpy.AddMessage(f"Copy Raster Mask for '{table_name}'")
- arcpy.management.Copy(rf"{project_gdb}\{table_name}_Raster_Mask", rf"{region_gdb}\{table_name}_Raster_Mask")
- arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- # # # Raster_Mask
-
- # # # Bathymetry
- arcpy.AddMessage(f"Copy Bathymetry for '{table_name}'")
- arcpy.management.Copy(rf"{project_bathymetry_gdb}\{table_name}_Bathymetry", rf"{region_gdb}\{table_name}_Fishnet_Bathymetry")
- arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- # # # Bathymetry
-
- # Declared Variables
- del table_name
-
- # Declared Variables
- del scratch_folder, region_gdb
- del csv_data_folder, project_bathymetry_gdb
- # Imports
- del dismap_tools
- # Function Parameters
- del project_gdb, table_names
-
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
- traceback.print_exc()
- sys.exit()
- except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
- return True
- finally:
- pass
def script_tool(project_gdb=""):
try:
@@ -333,7 +194,7 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -345,10 +206,14 @@ def script_tool(project_gdb=""):
## #table_name = "NBS_IDW"
## #table_name = "ENBS_IDW"
- table_names = ["HI_IDW",]
+ table_names = ["NBS_IDW",]
+
+ from create_region_bathymetry_director import preprocessing
preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
+ del preprocessing
+
for table_name in table_names:
region_gdb = rf"{os.path.dirname(project_gdb)}\Scratch\{table_name}.gdb"
try:
@@ -381,50 +246,43 @@ def script_tool(project_gdb=""):
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
- traceback.print_exc()
- sys.exit()
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
if __name__ == '__main__':
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026", "February 1 2026.gdb")
else:
pass
script_tool(project_gdb)
arcpy.SetParameterAsText(1, "Result")
del project_gdb
- except:
- traceback.print_exc()
- else:
- pass
- finally:
- pass
\ No newline at end of file
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
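The rewritten except blocks above call a shared `trace()` helper to report the failing line and file instead of repeating per-exception handlers. A minimal runnable sketch of that pattern; the `globals().get` fallback for `__file__` and the demo error are my additions, not part of the patch:

```python
import sys
import traceback

def trace():
    """Return (line, filename, synerror) for the most recent exception."""
    tb = sys.exc_info()[2]                        # traceback of the active exception
    tbinfo = traceback.format_tb(tb)[0]           # first frame: '  File "...", line N, in ...'
    line = tbinfo.split(", ")[1]                  # the 'line N' fragment
    filename = globals().get("__file__", "<interactive>")   # report this script
    synerror = traceback.format_exc().splitlines()[-1]      # final 'ExcType: message' line
    return line, filename, synerror

# Hypothetical demo: trigger an error and report it the way the script tools do.
try:
    1 / 0
except:  # noqa: E722
    line, filename, err = trace()
    print(f"Python error on {line} of {filename}")
    print(err)
```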
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_region_fishnets_director.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_region_fishnets_director.py
index a1637dc..cbca10b 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_region_fishnets_director.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_region_fishnets_director.py
@@ -9,9 +9,9 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
-
import inspect
import arcpy # third-parties second
@@ -22,10 +22,6 @@ def director(project_gdb="", Sequential=True, table_names=[]):
import dismap_tools
from create_region_fishnets_worker import worker
- # Test if passed workspace exists, if not sys.exit()
- if not arcpy.Exists(project_gdb):
- sys.exit()(f"{os.path.basename(project_gdb)} is missing!!")
-
# Set History and Metadata logs, set serverity and message level
arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
arcpy.SetLogMetadata(True)
@@ -36,19 +32,19 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Set basic workkpace variables
project_folder = os.path.dirname(project_gdb)
- scratch_folder = rf"{project_folder}\Scratch"
- scratch_workspace = rf"{project_folder}\Scratch\scratch.gdb"
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(project_folder, "Scratch", "scratch.gdb")
csv_data_folder = rf"{project_folder}\CSV_Data"
# Clear Scratch Folder
dismap_tools.clear_folder(folder=scratch_folder)
# Create Scratch Workspace for Project
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
if not arcpy.Exists(scratch_folder):
- os.makedirs(rf"{scratch_folder}")
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"scratch")
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(scratch_folder, "scratch")
# Set basic workkpace variables
arcpy.env.workspace = project_gdb
@@ -59,9 +55,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del project_folder
if not table_names:
- table_names = [row[0] for row in arcpy.da.SearchCursor(f"{project_gdb}\Datasets",
- "TableName",
- where_clause = "TableName LIKE '%_IDW'")]
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"), "TableName", where_clause="TableName LIKE '%_IDW'")]
else:
pass
@@ -69,39 +63,40 @@ def director(project_gdb="", Sequential=True, table_names=[]):
for table_name in table_names:
arcpy.AddMessage(f"Pre-Processing: {table_name}")
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
- region_scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ region_scratch_workspace = os.path.join(scratch_folder, f"{table_name}", "scratch.gdb")
# Create Scratch Workspace for Region
if not arcpy.Exists(region_scratch_workspace):
- os.makedirs(rf"{scratch_folder}\{table_name}")
+ os.makedirs(os.path.join(scratch_folder, table_name))
if not arcpy.Exists(region_scratch_workspace):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}\{table_name}", f"scratch")
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, table_name), "scratch")
del region_scratch_workspace
- datasets = [rf"{project_gdb}\Datasets", rf"{project_gdb}\{table_name}_Region"]
+ datasets = [os.path.join(project_gdb, "Datasets"), os.path.join(project_gdb, f"{table_name}_Region")]
if not any(arcpy.management.GetCount(d)[0] == 0 for d in datasets):
- if not arcpy.Exists(rf"{scratch_folder}\{table_name}.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, f"{table_name}.gdb")):
arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
else:
pass
- arcpy.management.Copy(rf"{project_gdb}\Datasets", rf"{region_gdb}\Datasets")
+ arcpy.management.Copy(os.path.join(project_gdb, "Datasets"), rf"{region_gdb}\Datasets")
arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- arcpy.management.Copy(rf"{project_gdb}\{table_name}_Region", rf"{region_gdb}\{table_name}_Region")
+ arcpy.management.Copy(os.path.join(project_gdb, f"{table_name}_Region"), rf"{region_gdb}\{table_name}_Region")
arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
else:
- arcpy.AddWarning(f"One or more datasets contains zero records!!")
+ arcpy.AddWarning("One or more datasets contains zero records!!")
for d in datasets:
arcpy.AddMessage(f"\t{os.path.basename(d)} has {arcpy.management.GetCount(d)[0]} records")
del d
arcpy.AddError(f"SystemExit at line number: '{traceback.extract_stack()[-1].lineno}'")
sys.exit()
- if "datasets" in locals().keys(): del datasets
+ if "datasets" in locals().keys():
+ del datasets
del region_gdb, table_name
@@ -109,15 +104,15 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Sequential Processing
if Sequential:
- arcpy.AddMessage(f"Sequential Processing")
+ arcpy.AddMessage("Sequential Processing")
for i in range(0, len(table_names)):
arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
try:
pass
worker(region_gdb=region_gdb)
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
sys.exit()
@@ -128,7 +123,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
if not Sequential:
import multiprocessing
from time import time, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"Start multiprocessing using the ArcGIS Pro pythonw.exe.")
+ arcpy.AddMessage("Start multiprocessing using the ArcGIS Pro pythonw.exe.")
#Set multiprocessing exe in case we're running as an embedded process, i.e ArcGIS
#get_install_path() uses a registry query to figure out 64bit python exe if available
multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
@@ -139,17 +134,17 @@ def director(project_gdb="", Sequential=True, table_names=[]):
#Create a pool of workers, keep one cpu free for surfing the net.
#Let each worker process only handle 1 task before being restarted (in case of nasty memory leaks)
with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
- arcpy.AddMessage(f"\tPrepare arguments for processing")
+ arcpy.AddMessage("\tPrepare arguments for processing")
# Use apply_async so we can handle exceptions gracefully
jobs={}
for i in range(0, len(table_names)):
try:
arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
jobs[table_name] = pool.apply_async(worker, [region_gdb])
del table_name, region_gdb
- except:
+ except: # noqa: E722
pool.terminate()
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
@@ -165,8 +160,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
end_time = time()
elapse_time = end_time - start_time
arcpy.AddMessage(f"\nStart Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
- arcpy.AddMessage(f"Have the workers finished?")
- arcpy.AddMessage(f"Have the workers finished?")
+ arcpy.AddMessage("Have the workers finished?")
finish_time = strftime('%a %b %d %I:%M %p', localtime())
time_elapsed = u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
@@ -184,7 +178,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
pool.terminate()
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
@@ -203,11 +197,11 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del result_completed
del start_time
del all_finished
- arcpy.AddMessage(f"\tClose the process pool")
+ arcpy.AddMessage("\tClose the process pool")
# close the process pool
pool.close()
# wait for all tasks to complete and processes to close
- arcpy.AddMessage(f"\tWait for all tasks to complete and processes to close")
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
pool.join()
# Just in case
pool.terminate()
@@ -215,7 +209,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del jobs
del _processes
del time, multiprocessing, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"\tDone with multiprocessing Pool")
+ arcpy.AddMessage("\tDone with multiprocessing Pool")
arcpy.AddMessage("Post-Processing")
arcpy.AddMessage("Processing Results")
@@ -229,7 +223,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del dirpath, dirnames, filenames
del walk
for dataset in datasets:
- datasets_short_path = f"{os.path.basename(os.path.dirname(os.path.dirname(dataset)))}\{os.path.basename(os.path.dirname(dataset))}\{os.path.basename(dataset)}"
+ datasets_short_path = f".. {'/'.join(dataset.split(os.sep)[-4:])}"
dataset_name = os.path.basename(dataset)
region_gdb = os.path.dirname(dataset)
arcpy.AddMessage(f"\tDataset: '{dataset_name}'")
@@ -272,14 +266,16 @@ def director(project_gdb="", Sequential=True, table_names=[]):
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -292,7 +288,7 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -304,11 +300,11 @@ def script_tool(project_gdb=""):
del project_folder
# Create project scratch workspace, if missing
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
if not arcpy.Exists(scratch_folder):
- os.makedirs(rf"{scratch_folder}")
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"scratch")
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(scratch_folder, "scratch")
del scratch_folder
# Set basic arcpy.env variables
@@ -322,17 +318,13 @@ def script_tool(project_gdb=""):
test = False
if test:
- #director(project_gdb=project_gdb, Sequential=True, table_names=["HI_IDW"])
- #director(project_gdb=project_gdb, Sequential=False, table_names=["SEUS_SPR_IDW", "HI_IDW"])
- #director(project_gdb=project_gdb, Sequential=False, table_names=["SEUS_SPR_IDW", "SEUS_FAL_IDW",])
director(project_gdb=project_gdb, Sequential=True, table_names=["NBS_IDW", "SEUS_FAL_IDW"])
else:
- #director(project_gdb=project_gdb, Sequential=False, table_names=["NBS_IDW", "ENBS_IDW", "HI_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW"])
- director(project_gdb=project_gdb, Sequential=False, table_names=["WC_TRI_IDW", "GMEX_IDW", "AI_IDW", "GOA_IDW", "WC_ANN_IDW"])
- director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_SPR_IDW", "EBS_IDW", "NEUS_FAL_IDW", "SEUS_SUM_IDW"])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
del test
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
sys.exit()
@@ -371,14 +363,16 @@ def script_tool(project_gdb=""):
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -387,13 +381,14 @@ def script_tool(project_gdb=""):
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ #project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\February 1 2026\February 1 2026.gdb"
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026", "February 1 2026.gdb")
else:
pass
script_tool(project_gdb)
arcpy.SetParameterAsText(1, "Result")
del project_gdb
- except:
+ except: # noqa: E722
traceback.print_exc()
else:
pass
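The recurring refactor across these files swaps `rf"...\..."` raw strings for `os.path.join`. A short sketch of why the component form composes more safely (folder names here are hypothetical); note that passing `"Scratch\\scratch.gdb"` as a single component keeps the literal backslash and undercuts the portability the refactor aims for:

```python
import os

scratch_folder = os.path.join("DisMAP", "Scratch")  # hypothetical project layout
table_name = "NBS_IDW"

# Before: raw f-string with a hard-coded backslash (Windows-only separator)
region_gdb_old = rf"{scratch_folder}\{table_name}.gdb"

# After: os.path.join emits the platform separator and composes cleanly
region_gdb_new = os.path.join(scratch_folder, f"{table_name}.gdb")

# Pitfall: a backslash inside a single component survives verbatim on every platform
mixed = os.path.join("DisMAP", "Scratch\\scratch.gdb")

print(region_gdb_new)
print(mixed)
```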
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_region_fishnets_worker.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_region_fishnets_worker.py
index a0f3dbe..f87f6a4 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_region_fishnets_worker.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_region_fishnets_worker.py
@@ -9,9 +9,9 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
-
import inspect
import arcpy # third-parties second
@@ -493,14 +493,16 @@ def worker(region_gdb=""):
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -513,7 +515,7 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -532,41 +534,41 @@ def script_tool(project_gdb=""):
dismap_tools.clear_folder(folder=scratch_folder)
# Create project scratch workspace, if missing
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
if not arcpy.Exists(scratch_folder):
- os.makedirs(rf"{scratch_folder}")
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"scratch")
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(scratch_folder, "scratch")
# Set worker parameters
- table_name = "AI_IDW"
+ #table_name = "AI_IDW"
#table_name = "GMEX_IDW"
#table_name = "HI_IDW"
#table_name = "SEUS_FAL_IDW"
- #table_name = "NBS_IDW"
+ table_name = "NBS_IDW"
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
if not arcpy.Exists(scratch_workspace):
- os.makedirs(rf"{scratch_folder}\{table_name}")
+ os.makedirs(os.path.join(scratch_folder, table_name))
if not arcpy.Exists(scratch_workspace):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}\{table_name}", f"scratch")
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, table_name), "scratch")
del scratch_workspace
# Setup worker workspace and copy data
- #datasets = [rf"{project_gdb}\Datasets", rf"{project_gdb}\{table_name}_Region"]
+ #datasets = [os.path.join(project_gdb, "Datasets"), os.path.join(project_gdb, f"{table_name}_Region")]
#if not any(arcpy.management.GetCount(d)[0] == 0 for d in datasets):
- if not arcpy.Exists(rf"{scratch_folder}\{table_name}.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, f"{table_name}.gdb")):
arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
else:
pass
- arcpy.management.Copy(rf"{project_gdb}\Datasets", rf"{region_gdb}\Datasets")
+ arcpy.management.Copy(os.path.join(project_gdb, "Datasets"), rf"{region_gdb}\Datasets")
arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- arcpy.management.Copy(rf"{project_gdb}\{table_name}_Region", rf"{region_gdb}\{table_name}_Region")
+ arcpy.management.Copy(os.path.join(project_gdb, f"{table_name}_Region"), rf"{region_gdb}\{table_name}_Region")
arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
#else:
@@ -579,7 +581,7 @@ def script_tool(project_gdb=""):
try:
pass
- #worker(region_gdb=region_gdb)
+ worker(region_gdb=region_gdb)
except SystemExit:
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
@@ -623,14 +625,16 @@ def script_tool(project_gdb=""):
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -639,13 +643,13 @@ def script_tool(project_gdb=""):
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026", "February 1 2026.gdb")
else:
pass
script_tool(project_gdb)
arcpy.SetParameterAsText(1, "Result")
del project_gdb
- except:
+ except: # noqa: E722
traceback.print_exc()
else:
pass
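The director's non-sequential branch fans out with `pool.apply_async`, keys jobs by table name, and later calls `result.get()` so worker exceptions resurface in the parent. A sketch of that shape: `multiprocessing.dummy` (a thread-backed Pool with the same API) stands in so it runs outside ArcGIS Pro's `pythonw.exe`, and the worker is hypothetical:

```python
from multiprocessing.dummy import Pool  # thread-backed stand-in for multiprocessing.Pool

def worker(region_gdb):
    # A real worker would process the region file GDB; here we just echo the path.
    return f"processed {region_gdb}"

table_names = ["AI_IDW", "NBS_IDW"]
with Pool(processes=2) as pool:
    # apply_async so exceptions can be handled per job instead of killing the pool
    jobs = {t: pool.apply_async(worker, [f"Scratch/{t}.gdb"]) for t in table_names}
    results = {t: job.get() for t, job in jobs.items()}  # .get() re-raises worker errors

print(results)
```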
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_region_sample_locations_director.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_region_sample_locations_director.py
index 5cd659d..8d1587a 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_region_sample_locations_director.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_region_sample_locations_director.py
@@ -9,13 +9,21 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
-import inspect
-
import arcpy # third-parties second
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = __file__
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
def director(project_gdb="", Sequential=True, table_names=[]):
try:
# Imports
@@ -34,19 +42,19 @@ def director(project_gdb="", Sequential=True, table_names=[]):
arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
project_folder = os.path.dirname(project_gdb)
- scratch_folder = rf"{project_folder}\Scratch"
- scratch_workspace = rf"{project_folder}\Scratch\scratch.gdb"
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(project_folder, "Scratch", "scratch.gdb")
csv_data_folder = rf"{project_folder}\CSV Data"
# Clear Scratch Folder
dismap_tools.clear_folder(folder=scratch_folder)
# Create Scratch Workspace for Project
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
if not arcpy.Exists(scratch_folder):
- os.makedirs(rf"{scratch_folder}")
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"scratch")
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(scratch_folder, "scratch")
arcpy.env.workspace = project_gdb
arcpy.env.scratchWorkspace = scratch_workspace
@@ -56,50 +64,48 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del project_folder, scratch_workspace
if not table_names:
- table_names = [row[0] for row in arcpy.da.SearchCursor(f"{project_gdb}\Datasets",
- "TableName",
- where_clause = "TableName LIKE '%_IDW'")]
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"), "TableName", where_clause="TableName LIKE '%_IDW'")]
else:
pass
# Pre Processing
for table_name in table_names:
arcpy.AddMessage(f"Pre-Processing: {table_name}")
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
region_scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
# Create Scratch Workspace for Region
if not arcpy.Exists(region_scratch_workspace):
- os.makedirs(rf"{scratch_folder}\{table_name}")
+ os.makedirs(os.path.join(scratch_folder, table_name))
if not arcpy.Exists(region_scratch_workspace):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}\{table_name}", f"scratch")
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, table_name), "scratch")
del region_scratch_workspace
- #datasets = [rf"{project_gdb}\Datasets", rf"{project_gdb}\{table_name}_Region"]
+ #datasets = [os.path.join(project_gdb, "Datasets"), os.path.join(project_gdb, f"{table_name}_Region")]
#if not any(arcpy.management.GetCount(d)[0] == 0 for d in datasets):
- if not arcpy.Exists(rf"{scratch_folder}\{table_name}.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, f"{table_name}.gdb")):
arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
else:
pass
- arcpy.management.Copy(rf"{project_gdb}\Datasets", rf"{region_gdb}\Datasets")
+ arcpy.management.Copy(os.path.join(project_gdb, "Datasets"), rf"{region_gdb}\Datasets")
arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- arcpy.management.Copy(rf"{project_gdb}\{table_name}_Region", rf"{region_gdb}\{table_name}_Region")
+ arcpy.management.Copy(os.path.join(project_gdb, f"{table_name}_Region"), rf"{region_gdb}\{table_name}_Region")
arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
del region_gdb, table_name
# Sequential Processing
if Sequential:
- arcpy.AddMessage(f"Sequential Processing")
+ arcpy.AddMessage("Sequential Processing")
for i in range(0, len(table_names)):
arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
try:
worker(region_gdb=region_gdb)
- except:
+ except: # noqa: E722
traceback.print_exc()
sys.exit()
del region_gdb, table_name
@@ -109,11 +115,11 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Non-Sequential Processing
if not Sequential:
- arcpy.AddMessage(f"Non-Sequential Processing")
+ arcpy.AddMessage("Non-Sequential Processing")
# Imports
import multiprocessing
from time import time, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"Start multiprocessing using the ArcGIS Pro pythonw.exe.")
+ arcpy.AddMessage("Start multiprocessing using the ArcGIS Pro pythonw.exe.")
#Set multiprocessing exe in case we're running as an embedded process, i.e ArcGIS
#get_install_path() uses a registry query to figure out 64bit python exe if available
multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
@@ -124,17 +130,17 @@ def director(project_gdb="", Sequential=True, table_names=[]):
#Create a pool of workers, keep one cpu free for surfing the net.
#Let each worker process only handle 1 task before being restarted (in case of nasty memory leaks)
with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
- arcpy.AddMessage(f"\tPrepare arguments for processing")
+ arcpy.AddMessage("\tPrepare arguments for processing")
# Use apply_async so we can handle exceptions gracefully
jobs={}
for i in range(0, len(table_names)):
try:
arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
jobs[table_name] = pool.apply_async(worker, [region_gdb])
del table_name, region_gdb
- except:
+ except: # noqa: E722
pool.terminate()
traceback.print_exc()
sys.exit()
@@ -149,8 +155,8 @@ def director(project_gdb="", Sequential=True, table_names=[]):
end_time = time()
elapse_time = end_time - start_time
arcpy.AddMessage(f"\nStart Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
- arcpy.AddMessage(f"Have the workers finished?")
- arcpy.AddMessage(f"Have the workers finished?")
+ arcpy.AddMessage("Have the workers finished?")
+
finish_time = strftime('%a %b %d %I:%M %p', localtime())
time_elapsed = u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
@@ -181,11 +187,11 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del result_completed
del start_time
del all_finished
- arcpy.AddMessage(f"\tClose the process pool")
+ arcpy.AddMessage("\tClose the process pool")
# close the process pool
pool.close()
# wait for all tasks to complete and processes to close
- arcpy.AddMessage(f"\tWait for all tasks to complete and processes to close")
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
pool.join()
# Just in case
pool.terminate()
@@ -193,7 +199,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del jobs
del _processes
del time, multiprocessing, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"\tDone with multiprocessing Pool")
+ arcpy.AddMessage("\tDone with multiprocessing Pool")
# Post-Processing
arcpy.AddMessage("Post-Processing Begins")
@@ -207,7 +213,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del dirpath, dirnames, filenames
del walk
for dataset in datasets:
- datasets_short_path = f"{os.path.basename(os.path.dirname(os.path.dirname(dataset)))}\{os.path.basename(os.path.dirname(dataset))}\{os.path.basename(dataset)}"
+ datasets_short_path = f".. {'/'.join(dataset.split(os.sep)[-4:])}"
dataset_name = os.path.basename(dataset)
region_gdb = os.path.dirname(dataset)
arcpy.AddMessage(f"\tDataset: '{dataset_name}'")
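The new `datasets_short_path` expression trims a long absolute path to its last few components for log messages, replacing the old chain of `os.path.basename`/`os.path.dirname` calls. A standalone sketch (the sample path is made up):

```python
import os

def short_path(path, keep=4):
    # Keep only the trailing `keep` components, behind a ".." prefix
    return ".. " + "/".join(path.split(os.sep)[-keep:])

# Build the sample with os.sep so the sketch is platform-independent
sample = os.sep.join(
    ["Users", "me", "Documents", "Scratch", "AI_IDW.gdb", "AI_IDW_Region"])
```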
@@ -231,34 +237,22 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Function Parameters
del project_gdb, Sequential, table_names
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
def script_tool(project_gdb=""):
try:
@@ -268,7 +262,7 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -285,17 +279,17 @@ def script_tool(project_gdb=""):
try:
# "AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW",
# "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",
- Test = True
+ Test = False
if Test:
director(project_gdb=project_gdb, Sequential=True, table_names=["GMEX_IDW"])
#director(project_gdb=project_gdb, Sequential=True, table_names=["NBS_IDW", "HI_IDW"])
elif not Test:
- #director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW"])
- director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "ENBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
else:
pass
del Test
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
sys.exit()
@@ -319,46 +313,43 @@ def script_tool(project_gdb=""):
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
if __name__ == '__main__':
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+            project_gdb = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026", "February 1 2026.gdb")
else:
pass
script_tool(project_gdb)
arcpy.SetParameterAsText(1, "Result")
del project_gdb
- except:
- traceback.print_exc()
- else:
- pass
- finally:
- pass
\ No newline at end of file
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_region_sample_locations_worker.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_region_sample_locations_worker.py
index 5dbda07..3ba78d4 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_region_sample_locations_worker.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_region_sample_locations_worker.py
@@ -9,19 +9,24 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
-
import inspect
import arcpy # third-parties second
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+        filename = __file__  # report this script, not a placeholder name
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
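A standalone sketch of the `trace()` pattern above, which pulls the failing line number and the final error message out of the exception currently being handled (here `__file__` stands in for the reported filename):

```python
import sys
import traceback

def trace():
    # Traceback of the exception currently being handled
    tb = sys.exc_info()[2]
    tbinfo = traceback.format_tb(tb)[0]
    # tbinfo looks like:  File "script.py", line 12, in worker ...
    line = tbinfo.split(", ")[1]
    filename = __file__
    # Last line of the formatted traceback, e.g. "ZeroDivisionError: ..."
    synerror = traceback.format_exc().splitlines()[-1]
    return line, filename, synerror

try:
    1 / 0
except ZeroDivisionError:
    line, filename, err = trace()
```

This only works inside an `except` block: once the handler exits, `sys.exc_info()` no longer carries the traceback.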
def worker(region_gdb=""):
try:
- # Test if passed workspace exists, if not sys.exit()
- if not arcpy.Exists(rf"{region_gdb}"):
- raise SystemExist(f"{os.path.basename(region_gdb)} is missing!!")
-
# Import the dismap_tools module to access tools
import dismap_tools
# Import
@@ -47,7 +52,7 @@ def worker(region_gdb=""):
project_folder = os.path.dirname(scratch_folder)
csv_data_folder = rf"{project_folder}\CSV_Data"
process_table = rf"{csv_data_folder}\{table_name}.csv"
- scratch_workspace = rf"{scratch_folder}\scratch.gdb"
+ scratch_workspace = os.path.join(scratch_folder, "scratch.gdb")
del scratch_folder
        # Set basic workspace variables
@@ -142,7 +147,7 @@ def worker(region_gdb=""):
#for column in list(df.columns): arcpy.AddMessage(column); del column # print columns
# ###--->>>
- arcpy.AddMessage(f"Inserting additional columns into the dataframe\n")
+ arcpy.AddMessage("Inserting additional columns into the dataframe\n")
arcpy.AddMessage(f"\tInserting 'DatasetCode' column into: {table_name}")
df.insert(0, "DatasetCode", datasetcode)
@@ -160,7 +165,7 @@ def worker(region_gdb=""):
if "MapValue" not in list(df.columns):
df.insert(df.columns.get_loc("WTCPUE")+1, "MapValue", np.nan)
#-->> MapValue
- arcpy.AddMessage(f"\tCalculating the MapValue values")
+ arcpy.AddMessage("\tCalculating the MapValue values")
df["MapValue"] = df["WTCPUE"].pow((1.0/3.0))
arcpy.AddMessage(f"\tInserting 'SpeciesCommonName' column into: {table_name}")
@@ -175,7 +180,7 @@ def worker(region_gdb=""):
#if "IDW" in table_name:
arcpy.AddMessage(f"\tInserting 'Season' {season} column into: {table_name}")
if "Season" not in list(df.columns):
- df.insert(df.columns.get_loc("Region")+1, "Season", season if season != None else "")
+ df.insert(df.columns.get_loc("Region")+1, "Season", season if season is not None else "")
arcpy.AddMessage(f"\tInserting 'SummaryProduct' column into: {table_name}")
if "SummaryProduct" not in list(df.columns):
@@ -220,28 +225,28 @@ def worker(region_gdb=""):
# ###--->>>
#arcpy.AddMessage(f"Updating and calculating new values for some columns\n")
#-->> DistributionProjectName
- arcpy.AddMessage(f"\tSetting 'NaN' in 'DistributionProjectName' to ''")
+ arcpy.AddMessage("\tSetting 'NaN' in 'DistributionProjectName' to ''")
#df.loc[df['DistributionProjectName'] == 'nan', 'DistributionProjectName'] = ""
df["DistributionProjectName"] = df["DistributionProjectName"].fillna("")
#-->> CommonName
- arcpy.AddMessage(f"\tSetting 'NaN' in 'CommonName' to ''")
+ arcpy.AddMessage("\tSetting 'NaN' in 'CommonName' to ''")
#df.loc[df['CommonName'] == 'nan', 'CommonName'] = ""
df["CommonName"] = df["CommonName"].fillna("")
- arcpy.AddMessage(f"\tSetting 'CommonName' unicode'")
+        arcpy.AddMessage("\tSetting 'CommonName' to unicode")
# Cast text as Unicode in the CommonName field
df["CommonName"] = df["CommonName"].astype("unicode")
#-->> SpeciesCommonName
- arcpy.AddMessage(f"\tCalculating SpeciesCommonName and setting it to 'Species (CommonName)'")
+ arcpy.AddMessage("\tCalculating SpeciesCommonName and setting it to 'Species (CommonName)'")
df["SpeciesCommonName"] = np.where(df["CommonName"] != "", df["Species"] + ' (' + df["CommonName"] + ')', "")
#-->> CommonNameSpecies
- arcpy.AddMessage(f"\tCalculating CommonNameSpecies and setting it to 'CommonName (Species)'")
+ arcpy.AddMessage("\tCalculating CommonNameSpecies and setting it to 'CommonName (Species)'")
df["CommonNameSpecies"] = np.where(df["CommonName"] != "", df["CommonName"] + ' (' + df["Species"] + ')', "")
- arcpy.AddMessage(f"\tReplacing Infinity values with Nulls")
+ arcpy.AddMessage("\tReplacing Infinity values with Nulls")
# Replace Inf with Nulls
# For some cell values in the 'WTCPUE' column, there is an Inf
# value representing an infinit
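The dataframe work in this hunk reduces to a few pandas/numpy operations: `MapValue` is the cube root of `WTCPUE`, the combined name columns come from `np.where`, and leftover infinities are nulled out. A sketch with made-up sample rows:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Species": ["Gadus morhua", "Unknown sp."],
    "CommonName": ["Atlantic cod", np.nan],
    "WTCPUE": [8.0, np.inf],
})

# MapValue is the cube root of the weight-CPUE value
df["MapValue"] = df["WTCPUE"].pow(1.0 / 3.0)
# Blank out missing common names before concatenating strings
df["CommonName"] = df["CommonName"].fillna("")
df["SpeciesCommonName"] = np.where(
    df["CommonName"] != "",
    df["Species"] + " (" + df["CommonName"] + ")", "")
# Replace +/-Inf left over in the survey columns with nulls
df = df.replace([np.inf, -np.inf], np.nan)
```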
@@ -266,10 +271,10 @@ def worker(region_gdb=""):
arcpy.AddMessage(f"\nDataframe report:\n{df.head(5)}\n")
- arcpy.AddMessage(f"Converting the Dataframe to an NumPy Array\n")
+ arcpy.AddMessage("Converting the Dataframe to an NumPy Array\n")
try:
array = np.array(np.rec.fromrecords(df.values), dtype = field_gdb_dtypes)
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
sys.exit()
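The `np.rec.fromrecords` call above builds the structured array that `arcpy.da` (for example `NumPyArrayToTable`) consumes; the dtype's field names and types must line up with the dataframe columns. A minimal sketch with an invented two-field dtype:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Species": ["Gadus morhua"], "MapValue": [2.0]})

# Illustrative dtype only; the real script derives field_gdb_dtypes elsewhere
field_gdb_dtypes = np.dtype([("Species", "U50"), ("MapValue", "<f8")])

# Plain tuples convert cleanly into a record array with the target dtype
records = list(df.itertuples(index=False, name=None))
array = np.rec.fromrecords(records, dtype=field_gdb_dtypes)
```

A mismatch between the dataframe's column order and the dtype's field order is the usual cause of the exception the script catches here.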
@@ -293,7 +298,7 @@ def worker(region_gdb=""):
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
sys.exit()
@@ -364,7 +369,7 @@ def worker(region_gdb=""):
del gsr_wkt, psr_wkt
del transformation
- arcpy.AddMessage(f"\tMake XY Event layer for IDW datasets")
+ arcpy.AddMessage("\tMake XY Event layer for IDW datasets")
# Set the output coordinate system to what is needed for the
# DisMAP project
#gsr = "GEOGCS['GCS_WGS_1984',DATUM['D_WGS_1984',SPHEROID['WGS_1984',6378137.0,298.257223563]],PRIMEM['Greenwich',0.0],UNIT['Degree',0.0174532925199433]]"
@@ -413,7 +418,8 @@ def worker(region_gdb=""):
# Get the count of records for selected species
getcount = arcpy.management.GetCount(out_features)[0]
- arcpy.AddMessage(f"\t{os.path.basename(out_features)} has {getcount} records"); del getcount
+ arcpy.AddMessage(f"\t{os.path.basename(out_features)} has {getcount} records")
+ del getcount
else:
pass
@@ -474,14 +480,16 @@ def worker(region_gdb=""):
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -494,7 +502,7 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -523,11 +531,11 @@ def script_tool(project_gdb=""):
            sys.exit(f"{os.path.basename(project_gdb)} is missing!!")
# Create project scratch workspace, if missing
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
if not arcpy.Exists(scratch_folder):
- os.makedirs(rf"{scratch_folder}")
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"scratch")
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+            arcpy.management.CreateFileGDB(scratch_folder, "scratch")
# Set worker parameters
table_name = "AI_IDW"
@@ -537,28 +545,28 @@ def script_tool(project_gdb=""):
#table_name = "GMEX_IDW"
#table_name = "ENBS_IDW"
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
# Create worker scratch workspace, if missing
if not arcpy.Exists(scratch_workspace):
- os.makedirs(rf"{scratch_folder}\{table_name}")
+ os.makedirs(os.path.join(scratch_folder, table_name))
if not arcpy.Exists(scratch_workspace):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}\{table_name}", f"scratch")
+                arcpy.management.CreateFileGDB(os.path.join(scratch_folder, table_name), "scratch")
del scratch_workspace
# Setup worker workspace and copy data
- #datasets = [rf"{project_gdb}\Datasets", rf"{project_gdb}\{table_name}_Region"]
+        #datasets = [os.path.join(project_gdb, "Datasets"), os.path.join(project_gdb, f"{table_name}_Region")]
#if not any(arcpy.management.GetCount(d)[0] == 0 for d in datasets):
- if not arcpy.Exists(rf"{scratch_folder}\{table_name}.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, f"{table_name}.gdb")):
arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
else:
pass
- arcpy.management.Copy(rf"{project_gdb}\Datasets", rf"{region_gdb}\Datasets")
+ arcpy.management.Copy(os.path.join(project_gdb, "Datasets"), rf"{region_gdb}\Datasets")
arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- arcpy.management.Copy(rf"{project_gdb}\{table_name}_Region", rf"{region_gdb}\{table_name}_Region")
+ arcpy.management.Copy(os.path.join(project_gdb, f"{table_name}_Region"), rf"{region_gdb}\{table_name}_Region")
arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
#else:
@@ -600,48 +608,43 @@ def script_tool(project_gdb=""):
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
if __name__ == '__main__':
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+            project_gdb = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026", "February 1 2026.gdb")
else:
pass
script_tool(project_gdb)
arcpy.SetParameterAsText(1, "Result")
del project_gdb
- except:
- traceback.print_exc()
- else:
- pass
- finally:
- pass
\ No newline at end of file
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_regions_from_shapefiles_director.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_regions_from_shapefiles_director.py
index d8d20d3..4acc894 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_regions_from_shapefiles_director.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_regions_from_shapefiles_director.py
@@ -9,10 +9,10 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
import inspect
-import shutil
import arcpy # third-parties second
@@ -21,14 +21,14 @@ def create_dismap_regions(project_gdb=""):
import dismap_tools
project_folder = os.path.dirname(project_gdb)
- csv_data_folder = rf"{project_folder}\CSV_Data"
+ csv_data_folder = os.path.join(project_folder, "CSV_Data")
arcpy.env.overwriteOutput = True
- if arcpy.Exists(rf"{project_gdb}\DisMAP_Regions"):
- arcpy.management.Delete(rf"{project_gdb}\DisMAP_Regions")
+ if arcpy.Exists(os.path.join(project_gdb, "DisMAP_Regions")):
+ arcpy.management.Delete(os.path.join(project_gdb, "DisMAP_Regions"))
- arcpy.AddMessage(f"Creating: 'DisMAP_Regions'")
+ arcpy.AddMessage("Creating: 'DisMAP_Regions'")
# Execute Tool
# Spatial Reference factory code of 4326 is : GCS_WGS_1984
# Spatial Reference factory code of 5714 is : Mean Sea Level (Height)
@@ -50,7 +50,7 @@ def create_dismap_regions(project_gdb=""):
arcpy.AddMessage("\tCreate Featureclass: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
del sp_ref
dismap_tools.add_fields(csv_data_folder, os.path.join(project_gdb, "DisMAP_Regions"))
- dismap_tools.import_metadata(dataset=rf"{project_gdb}\DisMAP_Regions")
+ dismap_tools.import_metadata(csv_data_folder, os.path.join(project_gdb, "DisMAP_Regions"))
# Imports
del dismap_tools
@@ -73,14 +73,16 @@ def create_dismap_regions(project_gdb=""):
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
arcpy.management.ClearWorkspaceCache()
@@ -89,9 +91,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
try:
# Imports
from arcpy import metadata as md
- # Test if passed workspace exists, if not sys.exit()
- if not arcpy.Exists(project_gdb):
- sys.exit()(f"{os.path.basename(project_gdb)} is missing!!")
+
# Imports
import dismap_tools
from create_regions_from_shapefiles_worker import worker
@@ -105,19 +105,19 @@ def director(project_gdb="", Sequential=True, table_names=[]):
arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
project_folder = os.path.dirname(project_gdb)
- scratch_folder = rf"{project_folder}\Scratch"
- scratch_workspace = rf"{project_folder}\Scratch\scratch.gdb"
+ scratch_folder = os.path.join(project_folder, "Scratch")
+        scratch_workspace = os.path.join(project_folder, "Scratch", "scratch.gdb")
csv_data_folder = rf"{project_folder}\CSV_Data"
# Clear Scratch Folder
dismap_tools.clear_folder(folder=scratch_folder)
# Create Scratch Workspace for Project
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
if not arcpy.Exists(scratch_folder):
- os.makedirs(rf"{scratch_folder}")
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"scratch")
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+                arcpy.management.CreateFileGDB(scratch_folder, "scratch")
arcpy.env.workspace = project_gdb
arcpy.env.scratchWorkspace = scratch_workspace
@@ -129,7 +129,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
create_dismap_regions(project_gdb)
if not table_names:
- table_names = [row[0] for row in arcpy.da.SearchCursor(f"{project_gdb}\Datasets",
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"),
"TableName",
where_clause = "TableName LIKE '%_IDW'")]
else:
@@ -138,30 +138,30 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Pre Processing
for table_name in table_names:
arcpy.AddMessage(f"Pre-Processing: {table_name}")
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
region_scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
# Create Scratch Workspace for Region
if not arcpy.Exists(region_scratch_workspace):
- os.makedirs(rf"{scratch_folder}\{table_name}")
+ os.makedirs(os.path.join(scratch_folder, table_name))
if not arcpy.Exists(region_scratch_workspace):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}\{table_name}", f"scratch")
+                    arcpy.management.CreateFileGDB(os.path.join(scratch_folder, table_name), "scratch")
del region_scratch_workspace
- #datasets = [rf"{project_gdb}\Datasets"]
+            #datasets = [os.path.join(project_gdb, "Datasets")]
#if not any(arcpy.management.GetCount(d)[0] == 0 for d in datasets):
- if not arcpy.Exists(rf"{scratch_folder}\{table_name}.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, f"{table_name}.gdb")):
arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
else:
pass
- arcpy.management.Copy(rf"{project_gdb}\Datasets", rf"{region_gdb}\Datasets")
+ arcpy.management.Copy(os.path.join(project_gdb, "Datasets"), rf"{region_gdb}\Datasets")
arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- arcpy.management.CreateFeatureclass(rf"{region_gdb}", "DisMAP_Regions", "POLYLINE", rf"{project_gdb}\DisMAP_Regions")
+ arcpy.management.CreateFeatureclass(rf"{region_gdb}", "DisMAP_Regions", "POLYLINE", os.path.join(project_gdb, "DisMAP_Regions"))
arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- dismap_regions_md = md.Metadata(rf"{project_gdb}\DisMAP_Regions")
+ dismap_regions_md = md.Metadata(os.path.join(project_gdb, "DisMAP_Regions"))
dataset_md = md.Metadata(rf"{region_gdb}\DisMAP_Regions")
dataset_md.copy(dismap_regions_md)
dataset_md.save()
@@ -183,11 +183,11 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Sequential Processing
if Sequential:
- arcpy.AddMessage(f"Sequential Processing")
+ arcpy.AddMessage("Sequential Processing")
for i in range(0, len(table_names)):
arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
try:
worker(region_gdb=region_gdb)
except SystemExit:
@@ -201,11 +201,11 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Non-Sequential Processing
if not Sequential:
- arcpy.AddMessage(f"Non-Sequential Processing")
+ arcpy.AddMessage("Non-Sequential Processing")
# Imports
import multiprocessing
from time import time, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"Start multiprocessing using the ArcGIS Pro pythonw.exe.")
+ arcpy.AddMessage("Start multiprocessing using the ArcGIS Pro pythonw.exe.")
            #Set multiprocessing exe in case we're running as an embedded process, i.e. ArcGIS
#get_install_path() uses a registry query to figure out 64bit python exe if available
multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
@@ -216,17 +216,17 @@ def director(project_gdb="", Sequential=True, table_names=[]):
#Create a pool of workers, keep one cpu free for surfing the net.
#Let each worker process only handle 1 task before being restarted (in case of nasty memory leaks)
with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
- arcpy.AddMessage(f"\tPrepare arguments for processing")
+ arcpy.AddMessage("\tPrepare arguments for processing")
# Use apply_async so we can handle exceptions gracefully
jobs={}
for i in range(0, len(table_names)):
try:
arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
jobs[table_name] = pool.apply_async(worker, [region_gdb])
del table_name, region_gdb
- except:
+ except: # noqa: E722
pool.terminate()
traceback.print_exc()
sys.exit()
@@ -241,7 +241,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
end_time = time()
elapse_time = end_time - start_time
arcpy.AddMessage(f"\nStart Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
- arcpy.AddMessage(f"Have the workers finished?")
+ arcpy.AddMessage("Have the workers finished?")
finish_time = strftime('%a %b %d %I:%M %p', localtime())
time_elapsed = u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
@@ -272,11 +272,11 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del result_completed
del start_time
del all_finished
- arcpy.AddMessage(f"\tClose the process pool")
+ arcpy.AddMessage("\tClose the process pool")
# close the process pool
pool.close()
# wait for all tasks to complete and processes to close
- arcpy.AddMessage(f"\tWait for all tasks to complete and processes to close")
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
pool.join()
# Just in case
pool.terminate()
@@ -284,7 +284,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del jobs
del _processes
del time, multiprocessing, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"\tDone with multiprocessing Pool")
+ arcpy.AddMessage("\tDone with multiprocessing Pool")
# Post-Processing
arcpy.AddMessage("Post-Processing Begins")
@@ -299,7 +299,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del walk
for dataset in datasets:
#print(dataset)
- datasets_short_path = f"{os.path.basename(os.path.dirname(os.path.dirname(dataset)))}\{os.path.basename(os.path.dirname(dataset))}\{os.path.basename(dataset)}"
+ datasets_short_path = f".. {'/'.join(dataset.split(os.sep)[-4:])}"
dataset_name = os.path.basename(dataset)
region_gdb = os.path.dirname(dataset)
arcpy.AddMessage(f"\tDataset: '{dataset_name}'")
@@ -311,7 +311,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
arcpy.AddMessage(f"\tAppending the {dataset_name} Dataset to the DisMAP Regions Dataset")
# Process: Append
arcpy.management.Append(inputs = rf"{project_gdb}\{dataset_name}",
- target = rf"{project_gdb}\DisMAP_Regions",
+ target = os.path.join(project_gdb, "DisMAP_Regions"),
schema_type = "NO_TEST",
field_mapping = "",
subtype = "")
@@ -350,14 +350,16 @@ def director(project_gdb="", Sequential=True, table_names=[]):
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -370,7 +372,7 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -386,16 +388,15 @@ def script_tool(project_gdb=""):
director(project_gdb=project_gdb, Sequential=True, table_names=["SEUS_SPR_IDW", "HI_IDW"])
#create_dismap_regions(project_gdb)
elif not test:
- #director(project_gdb=project_gdb, Sequential=False, table_names=["NBS_IDW", "ENBS_IDW", "HI_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW",])
- #director(project_gdb=project_gdb, Sequential=False, table_names=["WC_TRI_IDW", "GMEX_IDW", "AI_IDW", "GOA_IDW", "WC_ANN_IDW", "NEUS_FAL_IDW",])
- #director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_SPR_IDW", "EBS_IDW"])
- director(project_gdb=project_gdb, Sequential=False, table_names=[])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
del test
- except:
+ except: # noqa: E722
traceback.print_exc()
sys.exit()
    # Declared Variables
+ del project_gdb
# Elapsed time
end_time = time()
elapse_time = end_time - start_time
@@ -428,14 +429,16 @@ def script_tool(project_gdb=""):
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -444,13 +447,13 @@ def script_tool(project_gdb=""):
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser("~"), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026", "February 1 2026.gdb")
else:
pass
script_tool(project_gdb)
arcpy.SetParameterAsText(1, "Result")
del project_gdb
- except:
+ except: # noqa: E722
traceback.print_exc()
else:
pass
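The hunks in this file systematically replace `rf"{...}\..."` raw strings with `os.path.join`. A small sketch of why that matters, and of the one pitfall to avoid (passing a component that still embeds `"\\"` keeps the backslash literal on POSIX, which defeats the conversion):

```python
import os

# Example values mirroring the script's variables.
table_name = "HI_IDW"
scratch_folder = os.path.join("Projects", "DisMAP", "Scratch")

# os.path.join inserts the platform separator between components.
region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
assert region_gdb.split(os.sep)[-1] == "HI_IDW.gdb"

# A component like "Scratch\\scratch.gdb" is a single literal string on
# POSIX, so it should be split into two components instead:
good = os.path.join("Scratch", "scratch.gdb")
assert good == os.sep.join(["Scratch", "scratch.gdb"])
```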
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_regions_from_shapefiles_worker.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_regions_from_shapefiles_worker.py
index 270258a..61b15d5 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_regions_from_shapefiles_worker.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_regions_from_shapefiles_worker.py
@@ -9,9 +9,9 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
-
import inspect
import arcpy # third-parties second
@@ -82,8 +82,10 @@ def worker(region_gdb=""):
arcpy.AddMessage(f"\tSeason: {season}")
arcpy.AddMessage(f"\tDistri Code: {distri_code}")
- geographicarea_sr = os.path.join(f"{project_folder}", "Dataset_Shapefiles", f"{table_name}", f"{geographic_area}.prj")
- datasetcode_sr = arcpy.SpatialReference(geographicarea_sr); del geographicarea_sr
+ geographicarea_sr = os.path.join(project_folder, "Dataset_Shapefiles", table_name, f"{geographic_area}.prj")
+ arcpy.AddMessage(geographicarea_sr)
+ datasetcode_sr = arcpy.SpatialReference(geographicarea_sr)
+ del geographicarea_sr
if datasetcode_sr.linearUnitName == "Kilometer":
arcpy.env.cellSize = 1
@@ -218,14 +220,16 @@ def worker(region_gdb=""):
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -240,7 +244,7 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -252,18 +256,18 @@ def script_tool(project_gdb=""):
 # Set variables
project_folder = os.path.dirname(project_gdb)
- scratch_folder = rf"{project_folder}\Scratch"
+ scratch_folder = os.path.join(project_folder, "Scratch")
del project_folder
# Clear Scratch Folder
dismap_tools.clear_folder(folder=scratch_folder)
# Create project scratch workspace, if missing
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
if not arcpy.Exists(scratch_folder):
- os.makedirs(rf"{scratch_folder}")
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"scratch")
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(scratch_folder, "scratch")
else:
pass
@@ -275,14 +279,14 @@ def script_tool(project_gdb=""):
#table_name = "NBS_IDW"
#table_name = "SEUS_SPR_IDW"
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
- scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ scratch_workspace = os.path.join(scratch_folder, table_name, "scratch.gdb")
# Create worker scratch workspace, if missing
if not arcpy.Exists(scratch_workspace):
- os.makedirs(rf"{scratch_folder}\{table_name}")
+ os.makedirs(os.path.join(scratch_folder, table_name))
if not arcpy.Exists(scratch_workspace):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}\{table_name}", f"scratch")
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, table_name), "scratch")
del scratch_workspace
## edit = arcpy.da.Editor(region_gdb)
@@ -293,21 +297,21 @@ def script_tool(project_gdb=""):
## arcpy.AddMessage("operation started")
# Setup worker workspace and copy data
- #datasets = [rf"{project_gdb}\Datasets", rf"{project_gdb}\DisMAP_Regions"]
+ #datasets = [os.path.join(project_gdb, "Datasets"), os.path.join(project_gdb, "DisMAP_Regions")]
#if not any(arcpy.management.GetCount(d)[0] == 0 for d in datasets):
- if not arcpy.Exists(rf"{scratch_folder}\{table_name}.gdb"):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
+ if not arcpy.Exists(os.path.join(scratch_folder, f"{table_name}.gdb")):
+ arcpy.management.CreateFileGDB(scratch_folder, table_name)
arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
else:
pass
- arcpy.management.Copy(rf"{project_gdb}\Datasets", rf"{region_gdb}\Datasets")
+ arcpy.management.Copy(os.path.join(project_gdb, "Datasets"), os.path.join(region_gdb, "Datasets"))
arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ arcpy.management.CreateFeatureclass(region_gdb, "DisMAP_Regions", "POLYLINE", os.path.join(project_gdb, "DisMAP_Regions"))
+ arcpy.management.CreateFeatureclass(rf"{region_gdb}", "DisMAP_Regions", "POLYLINE", os.path.join(project_gdb, "DisMAP_Regions"))
arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- dismap_regions_md = md.Metadata(rf"{project_gdb}\DisMAP_Regions")
+ dismap_regions_md = md.Metadata(os.path.join(project_gdb, "DisMAP_Regions"))
dataset_md = md.Metadata(rf"{region_gdb}\DisMAP_Regions")
dataset_md.copy(dismap_regions_md)
dataset_md.save()
@@ -327,7 +331,7 @@ def script_tool(project_gdb=""):
try:
pass
- #worker(region_gdb=region_gdb)
+ worker(region_gdb=region_gdb)
except SystemExit:
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
@@ -336,7 +340,7 @@ def script_tool(project_gdb=""):
 # Declared Variables
del region_gdb, table_name, scratch_folder
# Imports
- del director, md, dismap_tools
+ del md, dismap_tools
# Function Parameters
del project_gdb
@@ -372,14 +376,16 @@ def script_tool(project_gdb=""):
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -388,13 +394,13 @@ def script_tool(project_gdb=""):
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser("~"), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026", "February 1 2026.gdb")
else:
pass
script_tool(project_gdb)
arcpy.SetParameterAsText(1, "Result")
del project_gdb
- except:
+ except: # noqa: E722
traceback.print_exc()
else:
pass
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_species_richness_rasters_director.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_species_richness_rasters_director.py
index e910e68..a512b1b 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_species_richness_rasters_director.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_species_richness_rasters_director.py
@@ -9,15 +9,141 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
-import inspect
import arcpy # third-parties second
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ #filename = sys.path[0] + os.sep + f"{os.path.basename(__file__)}"
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
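The new `trace()` helper parses the active exception's traceback to report the line, file, and final error message. A minimal arcpy-free demonstration of the same parsing (the `globals().get` guard for `__file__` is an addition for interactive use, not part of the diff):

```python
import os
import sys
import traceback

def trace():
    # Same shape as the diff's helper: report where the active
    # exception originated and its final message line.
    tb = sys.exc_info()[2]
    tbinfo = traceback.format_tb(tb)[0]
    line = tbinfo.split(", ")[1]  # e.g. "line 17"
    filename = os.path.basename(globals().get("__file__", "<string>"))
    synerror = traceback.format_exc().splitlines()[-1]
    return line, filename, synerror

try:
    raise RuntimeError("boom")
except RuntimeError:
    line, filename, err = trace()
```

Note that `format_tb(tb)[0]` reports the outermost frame of the traceback, so in deeply nested calls the line points at where the `try` caught the error, not necessarily where it was raised.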
+def preprocessing(project_gdb="", table_names="", clear_folder=True):
+ try:
+ import dismap_tools
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(scratch_folder, "scratch.gdb")
+
+ # Clear Scratch Folder
+ #ClearScratchFolder = True
+ #if ClearScratchFolder:
+ if clear_folder:
+ dismap_tools.clear_folder(folder=scratch_folder)
+ else:
+ pass
+ #del ClearScratchFolder
+ del clear_folder
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ del project_folder, scratch_workspace
+
+ if not table_names:
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"),
+ "TableName",
+ where_clause = "TableName LIKE '%_IDW'")]
+ else:
+ pass
+
+ for table_name in table_names:
+ arcpy.AddMessage(f"Pre-Processing: {table_name}")
+
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ region_scratch_workspace = os.path.join(scratch_folder, table_name, "scratch.gdb")
+
+ # Create Scratch Workspace for Region
+ if not arcpy.Exists(region_scratch_workspace):
+ os.makedirs(os.path.join(scratch_folder, table_name))
+ if not arcpy.Exists(region_scratch_workspace):
+ arcpy.AddMessage(f"Create File GDB: '{table_name}'")
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, table_name), "scratch")
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ del region_scratch_workspace
+ # # # CreateFileGDB
+ arcpy.AddMessage(f"Creating File GDB: '{table_name}'")
+ arcpy.management.CreateFileGDB(scratch_folder, table_name)
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # CreateFileGDB
+ # # # Datasets
+ # Process: Make Table View (Make Table View) (management)
+ datasets = os.path.join(project_gdb, "Datasets")
+ arcpy.AddMessage(f"'{os.path.basename(datasets)}' has {arcpy.management.GetCount(datasets)[0]} records")
+
+ table_name_view = "Dataset Table View"
+ arcpy.management.MakeTableView(in_table = datasets,
+ out_view = table_name_view,
+ where_clause = f"TableName = '{table_name}'"
+ )
+ arcpy.AddMessage(f"The table '{table_name_view}' has {arcpy.management.GetCount(table_name_view)[0]} records")
+ arcpy.management.CopyRows(table_name_view, rf"{region_gdb}\Datasets")
+ arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.management.Delete(table_name_view)
+ del table_name_view
+ # # # Datasets
+ # # # LayerSpeciesYearImageName
+ #arcpy.AddMessage(f"The table '{table_name}_LayerSpeciesYearImageName' has {arcpy.management.GetCount(table_name_view)[0]} records")
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_LayerSpeciesYearImageName", rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # LayerSpeciesYearImageName
+ # # # Raster_Mask
+ arcpy.management.Copy(rf"{project_gdb}\{table_name}_Raster_Mask", rf"{region_gdb}\{table_name}_Raster_Mask")
+ arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+ # # # Raster_Mask
+
+ del datasets #, filter_region, filter_subregion
+ # Leave so we can block the above code
+ # Declared Variables
+ del table_name
+
+ # Declared Variables
+ del scratch_folder, region_gdb
+ # Imports
+ del dismap_tools
+ # Function Parameters
+ del project_gdb, table_names
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
def director(project_gdb="", Sequential=True, table_names=[]):
try:
- from create_species_richness_rasters_worker import preprocessing, worker
+ from create_species_richness_rasters_worker import worker
# Test if passed workspace exists, if not sys.exit()
if not arcpy.Exists(rf"{project_gdb}"):
@@ -41,7 +167,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
## project_folder = os.path.dirname(project_gdb)
## scratch_folder = rf"{os.path.dirname(project_gdb)}\Scratch"
-## scratch_workspace = rf"{project_folder}\Scratch\scratch.gdb"
+## scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
##
## arcpy.env.workspace = project_gdb
## arcpy.env.scratchWorkspace = scratch_workspace
@@ -49,7 +175,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Sequential Processing
if Sequential:
- arcpy.AddMessage(f"Sequential Processing")
+ arcpy.AddMessage("Sequential Processing")
for i in range(0, len(table_names)):
arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
@@ -57,7 +183,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
try:
pass
worker(region_gdb=region_gdb)
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
sys.exit()
@@ -70,7 +196,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
if not Sequential:
import multiprocessing
from time import time, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"Start multiprocessing using the ArcGIS Pro pythonw.exe.")
+ arcpy.AddMessage("Start multiprocessing using the ArcGIS Pro pythonw.exe.")
#Set multiprocessing exe in case we're running as an embedded process, i.e ArcGIS
#get_install_path() uses a registry query to figure out 64bit python exe if available
multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
@@ -81,7 +207,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
#Create a pool of workers, keep one cpu free for surfing the net.
#Let each worker process only handle 1 task before being restarted (in case of nasty memory leaks)
with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
- arcpy.AddMessage(f"\tPrepare arguments for processing")
+ arcpy.AddMessage("\tPrepare arguments for processing")
# Use apply_async so we can handle exceptions gracefully
jobs={}
for i in range(0, len(table_names)):
@@ -91,7 +217,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
region_gdb = rf"{os.path.dirname(project_gdb)}\Scratch\{table_name}.gdb"
jobs[table_name] = pool.apply_async(worker, [region_gdb])
del table_name, region_gdb
- except:
+ except: # noqa: E722
pool.terminate()
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
@@ -106,7 +232,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Elapsed time
end_time = time()
elapse_time = end_time - start_time
- arcpy.AddMessage(f"\nHave the workers finished?")
+ arcpy.AddMessage("\nHave the workers finished?")
finish_time = strftime('%a %b %d %I:%M %p', localtime())
time_elapsed = u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
@@ -120,7 +246,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
try:
# wait for and get the result from the task
result.get()
- except:
+ except: # noqa: E722
pool.terminate()
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
@@ -139,11 +265,11 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del result_completed
del start_time
del all_finished
- arcpy.AddMessage(f"\tClose the process pool")
+ arcpy.AddMessage("\tClose the process pool")
# close the process pool
pool.close()
# wait for all tasks to complete and processes to close
- arcpy.AddMessage(f"\tWait for all tasks to complete and processes to close")
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
pool.join()
# Just in case
pool.terminate()
@@ -152,7 +278,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del _processes
del time, multiprocessing, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"\tDone with multiprocessing Pool")
+ arcpy.AddMessage("\tDone with multiprocessing Pool")
arcpy.AddMessage(f"Compacting the {os.path.basename(project_gdb)} GDB")
arcpy.management.Compact(project_gdb)
@@ -161,38 +287,26 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Declared Variables assigned in function
#del scratch_folder
# Imports
- del worker, preprocessing
+ del worker
# Function Parameters
del project_gdb, Sequential, table_names
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
def script_tool(project_gdb=""):
try:
@@ -202,7 +316,7 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -216,18 +330,16 @@ def script_tool(project_gdb=""):
Test = True
if Test:
pass
- director(project_gdb=project_gdb, Sequential=True, table_names=["SEUS_FAL_IDW"])
+ #director(project_gdb=project_gdb, Sequential=True, table_names=["SEUS_FAL_IDW"])
+ director(project_gdb=project_gdb, Sequential=True, table_names=["AI_IDW","EBS_IDW","NBS_IDW","NEUS_FAL_IDW",])
elif not Test:
- director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "GOA_IDW"])
- director(project_gdb=project_gdb, Sequential=False, table_names=["ENBS_IDW", "GOA_IDW", "NBS_IDW",])
- director(project_gdb=project_gdb, Sequential=False, table_names=["SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW",])
- director(project_gdb=project_gdb, Sequential=False, table_names=["HI_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
- director(project_gdb=project_gdb, Sequential=False, table_names=["GMEX_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
else:
pass
del Test
#except SystemExit:
- except:
+ except: # noqa: E722
pass
#arcpy.AddError(arcpy.GetMessages(2))
#traceback.print_exc()
@@ -253,48 +365,43 @@ def script_tool(project_gdb=""):
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
if __name__ == '__main__':
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser("~"), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026", "February 1 2026.gdb")
else:
pass
script_tool(project_gdb)
arcpy.SetParameterAsText(1, "Result")
del project_gdb
- except:
- traceback.print_exc()
- else:
- pass
- finally:
- pass
\ No newline at end of file
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_species_richness_rasters_worker.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_species_richness_rasters_worker.py
index f8f0347..f059dc7 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_species_richness_rasters_worker.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_species_richness_rasters_worker.py
@@ -9,13 +9,22 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
-import inspect
-
import arcpy # third-parties second
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ #filename = sys.path[0] + os.sep + f"{os.path.basename(__file__)}"
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
def print_table(table=""):
try:
""" Print first 5 rows of a table """
@@ -30,15 +39,27 @@ def print_table(table=""):
del row
del desc, fields, oid
del table
- except:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
def worker(region_gdb=""):
try:
# Test if passed workspace exists, if not sys.exit()
- if not arcpy.Exists(rf"{region_gdb}"):
+ if not arcpy.Exists(region_gdb):
arcpy.AddError(f"{os.path.basename(region_gdb)} is missing!!")
sys.exit()
@@ -78,7 +99,7 @@ def worker(region_gdb=""):
arcpy.AddMessage(f"Creating {table_name} Species Richness Rasters")
- arcpy.AddMessage(f"\tGet list of variables from the 'Datasets' table")
+ arcpy.AddMessage("\tGet list of variables from the 'Datasets' table")
# DatasetCode, CSVFile, TransformUnit, TableName, GeographicArea, CellSize,
# PointFeatureType, FeatureClassName, Region, Season, DateCode, Status,
@@ -99,6 +120,12 @@ def worker(region_gdb=""):
cell_size = region_list[3]
del region_list
+
+ if isinstance(cell_size, str):
+ cell_size = int(cell_size)
+ else:
+ pass
+
arcpy.AddMessage(f"\tGet the 'rowCount', 'columnCount', and 'lowerLeft' corner of '{table_name}_Raster_Mask'")
# These are used later to set the rows and columns for a zero numpy array
rowCount = int(arcpy.management.GetRasterProperties(region_raster_mask, "ROWCOUNT" ).getOutput(0))
@@ -109,7 +136,7 @@ def worker(region_gdb=""):
lowerLeft = arcpy.Point(raster_mask_extent.extent.XMin, raster_mask_extent.extent.YMin)
del raster_mask_extent
- arcpy.AddMessage(f"\tSet the 'outputCoordinateSystem' based on the projection information for the geographic region")
+ arcpy.AddMessage("\tSet the 'outputCoordinateSystem' based on the projection information for the geographic region")
#geographic_area_sr = rf"{project_folder}\Dataset_Shapefiles\{table_name}\{geographic_area}.prj"
#geographic_area_sr = arcpy.Describe(region_raster_mask).spatialReference
@@ -119,7 +146,7 @@ def worker(region_gdb=""):
#del geographic_area_sr, geographic_area, psr
#del geographic_area
- arcpy.AddMessage(f"\tGet information for input rasters")
+ arcpy.AddMessage("\tGet information for input rasters")
layerspeciesyearimagename = rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName"
@@ -143,7 +170,7 @@ def worker(region_gdb=""):
# arcpy.AddMessage(input_raster, input_rasters[input_raster])
# del input_raster
- arcpy.AddMessage(f"\tSet the output and scratch paths")
+ arcpy.AddMessage("\tSet the output and scratch paths")
# Set species_richness_path
species_richness_path = rf"{project_folder}\Images\{table_name}\_Species Richness"
@@ -156,7 +183,7 @@ def worker(region_gdb=""):
years = sorted(list(set([input_rasters[input_raster][2] for input_raster in input_rasters])))
- arcpy.AddMessage(f"\tProcessing all species")
+ arcpy.AddMessage("\tProcessing all species")
for year in years:
@@ -188,7 +215,7 @@ def worker(region_gdb=""):
arcpy.AddMessage(f"\t\tCreating Species Richness Raster for year: {year}")
- # Cast array as float32
+ # Cast array as float32
richnessArray = richnessArray.astype('float32')
# Convert Array to Raster
@@ -217,7 +244,7 @@ def worker(region_gdb=""):
# ###--->>>
- arcpy.AddMessage(f"\tCreating the {table_name} Core Species Richness Rasters")
+ arcpy.AddMessage(f"\tCreating the {table_name} Core Species Richness Rasters")
# Set core_species_richness_path
core_species_richness_path = rf"{project_folder}\Images\{table_name}\_Core Species Richness"
@@ -231,7 +258,7 @@ def worker(region_gdb=""):
years = sorted(list(set([input_rasters[input_raster][2] for input_raster in input_rasters if input_rasters[input_raster][1] == "Yes"])))
# ###--->>>
- arcpy.AddMessage(f"\t\tProcessing Core Species")
+ arcpy.AddMessage("\t\tProcessing Core Species")
for year in years:
@@ -304,173 +331,34 @@ def worker(region_gdb=""):
# Function parameter
del region_gdb
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
-
-def preprocessing(project_gdb="", table_names="", clear_folder=True):
- try:
- import dismap_tools
-
- arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
- arcpy.SetLogMetadata(True)
- arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
- # 1—If a tool produces a warning or an error, it will throw an exception.
- # 2—If a tool produces an error, it will throw an exception. This is the default.
- arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
-
- # Set basic arcpy.env variables
- arcpy.env.overwriteOutput = True
- arcpy.env.parallelProcessingFactor = "100%"
-
- # Set varaibales
- project_folder = os.path.dirname(project_gdb)
- scratch_folder = rf"{project_folder}\Scratch"
- scratch_workspace = rf"{project_folder}\Scratch\scratch.gdb"
-
- # Clear Scratch Folder
- #ClearScratchFolder = True
- #if ClearScratchFolder:
- if clear_folder:
- dismap_tools.clear_folder(folder=rf"{os.path.dirname(project_gdb)}\Scratch")
- else:
- pass
- #del ClearScratchFolder
- del clear_folder
-
- arcpy.env.workspace = project_gdb
- arcpy.env.scratchWorkspace = scratch_workspace
- del project_folder, scratch_workspace
-
- if not table_names:
- table_names = [row[0] for row in arcpy.da.SearchCursor(f"{project_gdb}\Datasets",
- "TableName",
- where_clause = "TableName LIKE '%_IDW'")]
- else:
- pass
-
- for table_name in table_names:
- arcpy.AddMessage(f"Pre-Processing: {table_name}")
-
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
- region_scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
-
- # Create Scratch Workspace for Region
- if not arcpy.Exists(region_scratch_workspace):
- os.makedirs(rf"{scratch_folder}\{table_name}")
- if not arcpy.Exists(region_scratch_workspace):
- arcpy.AddMessage(f"Create File GDB: '{table_name}'")
- arcpy.management.CreateFileGDB(rf"{scratch_folder}\{table_name}", f"scratch")
- arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- del region_scratch_workspace
- # # # CreateFileGDB
- arcpy.AddMessage(f"Creating File GDB: '{table_name}'")
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
- arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- # # # CreateFileGDB
- # # # Datasets
- # Process: Make Table View (Make Table View) (management)
- datasets = rf'{project_gdb}\Datasets'
- arcpy.AddMessage(f"'{os.path.basename(datasets)}' has {arcpy.management.GetCount(datasets)[0]} records")
-
- table_name_view = "Dataset Table View"
- arcpy.management.MakeTableView(in_table = datasets,
- out_view = table_name_view,
- where_clause = f"TableName = '{table_name}'"
- )
- arcpy.AddMessage(f"The table '{table_name_view}' has {arcpy.management.GetCount(table_name_view)[0]} records")
- arcpy.management.CopyRows(table_name_view, rf"{region_gdb}\Datasets")
- arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-
- arcpy.management.Delete(table_name_view)
- del table_name_view
- # # # Datasets
- # # # LayerSpeciesYearImageName
- #arcpy.AddMessage(f"The table '{table_name}_LayerSpeciesYearImageName' has {arcpy.management.GetCount(table_name_view)[0]} records")
- arcpy.management.Copy(rf"{project_gdb}\{table_name}_LayerSpeciesYearImageName", rf"{region_gdb}\{table_name}_LayerSpeciesYearImageName")
- arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- # # # LayerSpeciesYearImageName
- # # # Raster_Mask
- arcpy.management.Copy(rf"{project_gdb}\{table_name}_Raster_Mask", rf"{region_gdb}\{table_name}_Raster_Mask")
- arcpy.AddMessage("\tCopy: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
- # # # Raster_Mask
-
- del datasets #, filter_region, filter_subregion
- # Leave so we can block the above code
- # Declared Variables
- del table_name
-
- # Declared Variables
- del scratch_folder, region_gdb
- # Imports
- del dismap_tools
- # Function Parameters
- del project_gdb, table_names
-
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
- except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
- return True
- finally:
- pass
def script_tool(project_gdb=""):
try:
# Imports
import dismap_tools
+ from create_species_richness_rasters_director import preprocessing
from time import gmtime, localtime, strftime, time
# Set a start time so that we can see how log things take
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -511,7 +399,7 @@ def script_tool(project_gdb=""):
# Declared Variables
# Imports
- del dismap_tools
+ del dismap_tools, preprocessing
# Function Parameters
del project_gdb
@@ -530,48 +418,43 @@ def script_tool(project_gdb=""):
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
if __name__ == '__main__':
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
else:
pass
script_tool(project_gdb)
arcpy.SetParameterAsText(1, "Result")
del project_gdb
- except:
- traceback.print_exc()
- else:
- pass
- finally:
- pass
\ No newline at end of file
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
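Every `except` block in this patch now funnels through the same small `trace()` helper, which pulls the failing line number and last error message out of the active exception. Outside of ArcGIS the pattern can be exercised as a standalone sketch; here `print` stands in for `arcpy.AddError`, and the filename lookup is guarded so the snippet also runs where `__file__` is undefined:

```python
import os
import sys
import traceback

def trace():
    """Return (line, filename, last error line) for the exception being handled."""
    tb = sys.exc_info()[2]                 # traceback of the active exception
    tbinfo = traceback.format_tb(tb)[0]    # e.g. '  File "x.py", line 18, in worker\n    1 / 0\n'
    line = tbinfo.split(", ")[1]           # -> 'line 18'
    filename = os.path.basename(globals().get("__file__", "sketch.py"))
    synerror = traceback.format_exc().splitlines()[-1]
    return line, filename, synerror

def worker():
    try:
        1 / 0
    except Exception:
        # Mirrors the patch's "Python error on <line> of <file>" reporting.
        line, filename, err = trace()
        print(f"Python error on {line} of {filename}")
        print(err)
        return False
    else:
        return True
```

Because `trace()` reads `sys.exc_info()`, it must be called from inside an `except` block; called elsewhere it has no traceback to inspect.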
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_species_year_image_name_table_director.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_species_year_image_name_table_director.py
index 815cb46..2263419 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_species_year_image_name_table_director.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_species_year_image_name_table_director.py
@@ -9,13 +9,166 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
-import inspect
-
import arcpy # third-parties second
+
+def trace():
+ """Return the line number, file name, and last error message for the active exception."""
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def preprocessing(project_gdb="", table_names="", clear_folder=True):
+ try:
+ import dismap_tools
+
+ arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
+ arcpy.SetLogMetadata(True)
+ arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
+ # 1—If a tool produces a warning or an error, it will throw an exception.
+ # 2—If a tool produces an error, it will throw an exception. This is the default.
+ arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
+
+ # Set basic arcpy.env variables
+ arcpy.env.overwriteOutput = True
+ arcpy.env.parallelProcessingFactor = "100%"
+
+ # Set variables
+ project_folder = os.path.dirname(project_gdb)
+ scratch_folder = rf"{project_folder}\Scratch"
+ scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
+
+ # Clear Scratch Folder
+ #ClearScratchFolder = True
+ #if ClearScratchFolder:
+ if clear_folder:
+ dismap_tools.clear_folder(folder=scratch_folder)
+ else:
+ pass
+ #del ClearScratchFolder
+ del clear_folder
+
+ arcpy.env.workspace = project_gdb
+ arcpy.env.scratchWorkspace = scratch_workspace
+ del project_folder, scratch_workspace
+
+ if not table_names:
+ table_names = [row[0] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"),
+ "TableName",
+ where_clause = "TableName LIKE '%_IDW'")]
+ else:
+ pass
+
+ for table_name in table_names:
+ arcpy.AddMessage(f"Pre-Processing: {table_name}")
+
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
+ region_scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
+
+ # Create Scratch Workspace for Region
+ if not arcpy.Exists(region_scratch_workspace):
+ os.makedirs(os.path.join(scratch_folder, table_name))
+ if not arcpy.Exists(region_scratch_workspace):
+ arcpy.management.CreateFileGDB(os.path.join(scratch_folder, table_name), "scratch")
+ del region_scratch_workspace
+
+ arcpy.AddMessage(f"Creating File GDB: {table_name}")
+ arcpy.management.CreateFileGDB(scratch_folder, table_name)
+ arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ # Process: Make Table View (Make Table View) (management)
+ datasets = rf'{project_gdb}\Datasets'
+ arcpy.AddMessage(f"\t{os.path.basename(datasets)} has {arcpy.management.GetCount(datasets)[0]} records")
+
+ table_name_view = "Dataset Table View"
+ arcpy.management.MakeTableView(in_table = datasets,
+ out_view = table_name_view,
+ where_clause = f"TableName = '{table_name}'"
+ )
+ arcpy.AddMessage(f"\tThe table {table_name_view} has {arcpy.management.GetCount(table_name_view)[0]} records")
+ arcpy.management.CopyRows(table_name_view, rf"{region_gdb}\Datasets")
+ arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ filter_region = [row[0] for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", "FilterRegion")][0].replace("'", "''")
+ filter_subregion = [row[0] for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", "FilterSubRegion")][0].replace("'", "''")
+
+ arcpy.management.Delete(table_name_view)
+ del table_name_view
+
+ region_table = rf"{project_gdb}\{table_name}"
+ arcpy.AddMessage(f"\t{os.path.basename(region_table)} has {arcpy.management.GetCount(region_table)[0]} records")
+ # Process: Make Table View (Make Table View) (management)
+ table_name_view = "IDW Table View"
+ arcpy.management.MakeTableView(in_table = region_table,
+ out_view = table_name_view,
+ where_clause = "DistributionProjectName = 'NMFS/Rutgers IDW Interpolation'"
+ )
+ # Process: Copy Rows (Copy Rows) (management)
+ arcpy.AddMessage(f"\t{table_name_view} has {arcpy.management.GetCount(table_name_view)[0]} records")
+ arcpy.management.CopyRows(in_rows = table_name_view, out_table = rf"{region_gdb}\{table_name}")
+ arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.management.Delete(table_name_view)
+ del table_name_view
+
+ # Process: Make Table View (Make Table View) (management)
+ #arcpy.AddMessage(filter_subregion)
+ species_filter = rf"{project_gdb}\Species_Filter"
+ arcpy.AddMessage(f"\t{os.path.basename(species_filter)} has {arcpy.management.GetCount(species_filter)[0]} records")
+ table_name_view = "Species Filter Table View"
+ arcpy.management.MakeTableView(in_table = species_filter,
+ out_view = table_name_view,
+ #where_clause = f"FilterSubRegion = '{filter_subregion}'",
+ where_clause = f"FilterSubRegion = '{filter_subregion}' AND DistributionProjectName = 'NMFS/Rutgers IDW Interpolation'",
+ workspace=region_gdb,
+ field_info="OBJECTID OBJECTID VISIBLE NONE;Species Species VISIBLE NONE;CommonName CommonName VISIBLE NONE;TaxonomicGroup TaxonomicGroup VISIBLE NONE;FilterRegion FilterRegion VISIBLE NONE;FilterSubRegion FilterSubRegion VISIBLE NONE;ManagementBody ManagementBody VISIBLE NONE;ManagementPlan ManagementPlan VISIBLE NONE;DistributionProjectName DistributionProjectName VISIBLE NONE"
+ )
+
+ arcpy.AddMessage(f"\t{table_name_view} has {arcpy.management.GetCount(table_name_view)[0]} records")
+ arcpy.management.CopyRows(in_rows = table_name_view, out_table = rf"{region_gdb}\Species_Filter")
+ arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
+
+ arcpy.management.Delete(table_name_view)
+ del table_name_view
+ #print(filter_region, filter_subregion)
+ #
+ del region_table, species_filter
+ del datasets, filter_region, filter_subregion
+ # Leave so we can block the above code
+ # Declared Variables
+ del table_name
+
+ # Declared Variables
+ del scratch_folder, region_gdb
+ # Imports
+ del dismap_tools
+ # Function Parameters
+ del project_gdb, table_names
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
def director(project_gdb="", Sequential=True, table_names=[]):
try:
# Test if passed workspace exists, if not sys.exit()
@@ -23,7 +176,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
sys.exit(f"{os.path.basename(project_gdb)} is missing!!")
import dismap_tools
- from create_species_year_image_name_table_worker import preprocessing, worker
+ from create_species_year_image_name_table_worker import worker
arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
arcpy.SetLogMetadata(True)
@@ -35,127 +188,27 @@ def director(project_gdb="", Sequential=True, table_names=[]):
arcpy.env.parallelProcessingFactor = "100%"
project_folder = os.path.dirname(project_gdb)
- scratch_folder = rf"{project_folder}\Scratch"
+ scratch_folder = os.path.join(project_folder, "Scratch")
del project_folder
- #scratch_workspace = rf"{project_folder}\Scratch\scratch.gdb"
+ #scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
#csv_data_folder = rf"{project_folder}\CSV_Data"
#arcpy.env.workspace = project_gdb
#arcpy.env.scratchWorkspace = scratch_workspace
#del project_folder, scratch_workspace
-## # Clear Scratch Folder
-## ClearScratchFolder = True
-## if ClearScratchFolder:
-## dismap_tools.clear_folder(folder=scratch_folder)
-## else:
-## pass
-## del ClearScratchFolder
preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
-## # Create Scratch Workspace for Project
-## if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
-## if not arcpy.Exists(scratch_folder):
-## os.makedirs(rf"{scratch_folder}")
-## if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
-## arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"scratch")
-
-
-# # # # Moved to preprocessing
-## if not table_names:
-## table_names = [row[0] for row in arcpy.da.SearchCursor(f"{project_gdb}\Datasets",
-## "TableName",
-## where_clause = "TableName LIKE '%_IDW'")]
-## else:
-## pass
-
-## #print(table_names)
-## #sys.exit()
-##
-## # Pre Processing
-## for table_name in table_names:
-## arcpy.AddMessage(f"Pre-Processing: {table_name}")
-##
-## region_gdb = rf"{scratch_folder}\{table_name}.gdb"
-## region_scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
-##
-## # Create Scratch Workspace for Region
-## if not arcpy.Exists(region_scratch_workspace):
-## os.makedirs(rf"{scratch_folder}\{table_name}")
-## if not arcpy.Exists(region_scratch_workspace):
-## arcpy.management.CreateFileGDB(rf"{scratch_folder}\{table_name}", f"scratch")
-## del region_scratch_workspace
-##
-## arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
-## arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-##
-## # Process: Make Table View (Make Table View) (management)
-## datasets = rf'{project_gdb}\Datasets'
-## arcpy.AddMessage(f"\t{os.path.basename(datasets)} has {arcpy.management.GetCount(datasets)[0]} records")
-##
-## table_name_view = "Dataset Table View"
-## arcpy.management.MakeTableView(in_table = datasets,
-## out_view = table_name_view,
-## where_clause = f"TableName = '{table_name}'"
-## )
-## arcpy.AddMessage(f"\t{table_name_view} has {arcpy.management.GetCount(table_name_view)[0]} records")
-## arcpy.management.CopyRows(table_name_view, rf"{region_gdb}\Datasets")
-## arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-##
-## filter_subregion = [row[0] for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", "FilterSubRegion")][0].replace("'", "''")
-##
-## arcpy.management.Delete(table_name_view)
-## del table_name_view
-
-## region_table = rf"{project_gdb}\{table_name}"
-## arcpy.AddMessage(f"\t{os.path.basename(region_table)} has {arcpy.management.GetCount(region_table)[0]} records")
-## # Process: Make Table View (Make Table View) (management)
-## table_name_view = "IDW Table View"
-## arcpy.management.MakeTableView(in_table = region_table,
-## out_view = table_name_view,
-## where_clause = "DistributionProjectName = 'NMFS/Rutgers IDW Interpolation'"
-## )
-## # Process: Copy Rows (Copy Rows) (management)
-## arcpy.AddMessage(f"\t{table_name_view} has {arcpy.management.GetCount(table_name_view)[0]} records")
-## arcpy.management.CopyRows(in_rows = table_name_view, out_table = rf"{region_gdb}\{table_name}")
-## arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-##
-## arcpy.management.Delete(table_name_view)
-##
-## # Process: Make Table View (Make Table View) (management)
-## #arcpy.AddMessage(filter_subregion)
-## species_filter = rf"{project_gdb}\Species_Filter"
-## arcpy.AddMessage(f"\t{os.path.basename(species_filter)} has {arcpy.management.GetCount(species_filter)[0]} records")
-## table_name_view = "Species Filter Table View"
-## arcpy.management.MakeTableView(in_table = species_filter,
-## out_view = table_name_view,
-## #where_clause = f"FilterSubRegion = '{filter_subregion}'",
-## where_clause = f"FilterSubRegion = '{filter_subregion}' AND DistributionProjectName = 'NMFS/Rutgers IDW Interpolation'",
-## workspace=region_gdb,
-## field_info="OBJECTID OBJECTID VISIBLE NONE;Species Species VISIBLE NONE;CommonName CommonName VISIBLE NONE;TaxonomicGroup TaxonomicGroup VISIBLE NONE;FilterRegion FilterRegion VISIBLE NONE;FilterSubRegion FilterSubRegion VISIBLE NONE;ManagementBody ManagementBody VISIBLE NONE;ManagementPlan ManagementPlan VISIBLE NONE;DistributionProjectName DistributionProjectName VISIBLE NONE"
-## )
-##
-## arcpy.AddMessage(f"\t{table_name_view} has {arcpy.management.GetCount(table_name_view)[0]} records")
-## arcpy.management.CopyRows(in_rows = table_name_view, out_table = rf"{region_gdb}\Species_Filter")
-## arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-##
-## arcpy.management.Delete(table_name_view)
-## del table_name_view
-## #
-## del datasets, region_table, species_filter
-## del filter_subregion
-# # # # Moved to preprocessing
-
# Sequential Processing
if Sequential:
- arcpy.AddMessage(f"Sequential Processing")
+ arcpy.AddMessage("Sequential Processing")
for i in range(0, len(table_names)):
arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
try:
worker(region_gdb=region_gdb)
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
sys.exit()
@@ -166,11 +219,11 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Non-Sequential Processing
if not Sequential:
- arcpy.AddMessage(f"Non-Sequential Processing")
+ arcpy.AddMessage("Non-Sequential Processing")
# Imports
import multiprocessing
from time import time, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"Start multiprocessing using the ArcGIS Pro pythonw.exe.")
+ arcpy.AddMessage("Start multiprocessing using the ArcGIS Pro pythonw.exe.")
#Set multiprocessing exe in case we're running as an embedded process, i.e ArcGIS
#get_install_path() uses a registry query to figure out 64bit python exe if available
multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
@@ -181,17 +234,17 @@ def director(project_gdb="", Sequential=True, table_names=[]):
#Create a pool of workers, keep one cpu free for surfing the net.
#Let each worker process only handle 1 task before being restarted (in case of nasty memory leaks)
with multiprocessing.Pool(processes=_processes, maxtasksperchild=1) as pool:
- arcpy.AddMessage(f"\tPrepare arguments for processing")
+ arcpy.AddMessage("\tPrepare arguments for processing")
# Use apply_async so we can handle exceptions gracefully
jobs={}
for i in range(0, len(table_names)):
try:
arcpy.AddMessage(f"Processing: {table_names[i]}")
table_name = table_names[i]
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
jobs[table_name] = pool.apply_async(worker, [region_gdb])
del table_name, region_gdb
- except:
+ except: # noqa: E722
pool.terminate()
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
@@ -207,8 +260,8 @@ def director(project_gdb="", Sequential=True, table_names=[]):
end_time = time()
elapse_time = end_time - start_time
arcpy.AddMessage(f"\nStart Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
- arcpy.AddMessage(f"Have the workers finished?")
- arcpy.AddMessage(f"Have the workers finished?")
+ arcpy.AddMessage("Have the workers finished?")
finish_time = strftime('%a %b %d %I:%M %p', localtime())
time_elapsed = u"Elapsed Time {0} (H:M:S)".format(strftime("%H:%M:%S", gmtime(elapse_time)))
arcpy.AddMessage(f"It's {finish_time}\n{time_elapsed}")
@@ -240,11 +293,11 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del result_completed
del start_time
del all_finished
- arcpy.AddMessage(f"\tClose the process pool")
+ arcpy.AddMessage("\tClose the process pool")
# close the process pool
pool.close()
# wait for all tasks to complete and processes to close
- arcpy.AddMessage(f"\tWait for all tasks to complete and processes to close")
+ arcpy.AddMessage("\tWait for all tasks to complete and processes to close")
pool.join()
# Just in case
pool.terminate()
@@ -252,7 +305,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del jobs
del _processes
del time, multiprocessing, localtime, strftime, sleep, gmtime
- arcpy.AddMessage(f"\tDone with multiprocessing Pool")
+ arcpy.AddMessage("\tDone with multiprocessing Pool")
# Post-Processing
arcpy.AddMessage("Post-Processing Begins")
@@ -269,7 +322,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
del dirpath, dirnames, filenames
del walk
for dataset in datasets:
- datasets_short_path = f"{os.path.basename(os.path.dirname(os.path.dirname(dataset)))}\{os.path.basename(os.path.dirname(dataset))}\{os.path.basename(dataset)}"
+ datasets_short_path = f".. {'/'.join(dataset.split(os.sep)[-4:])}"
dataset_name = os.path.basename(dataset)
region_gdb = os.path.dirname(dataset)
arcpy.AddMessage(f"\tDataset: '{dataset_name}'")
@@ -279,7 +332,7 @@ def director(project_gdb="", Sequential=True, table_names=[]):
arcpy.management.Copy(rf"{region_gdb}\{dataset_name}", rf"{project_gdb}\{dataset_name}")
arcpy.AddMessage("\tCopy: {0} {1}\n".format(dataset_name, arcpy.GetMessages(0).replace("\n", '\n\t')))
- arcpy.AddMessage(f"\t\tUpdating field values to replace None with empty string")
+ arcpy.AddMessage("\t\tUpdating field values to replace None with empty string")
fields = [f.name for f in arcpy.ListFields(rf"{project_gdb}\{dataset_name}") if f.type == "String"]
# Create update cursor for feature class
with arcpy.da.UpdateCursor(rf"{project_gdb}\{dataset_name}", fields) as cursor:
@@ -305,38 +358,26 @@ def director(project_gdb="", Sequential=True, table_names=[]):
# Declared Variables assigned in function
del scratch_folder
# Imports
- del dismap_tools, preprocessing, worker
+ del dismap_tools, worker
# Function Parameters
del project_gdb, Sequential, table_names
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
def script_tool(project_gdb=""):
try:
@@ -347,7 +388,7 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -371,11 +412,11 @@ def script_tool(project_gdb=""):
#del clear_folder
# Create project scratch workspace, if missing
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
if not arcpy.Exists(scratch_folder):
- os.makedirs(rf"{scratch_folder}")
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"scratch")
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(scratch_folder, "scratch")
del scratch_folder
# Set basic arcpy.env variables
@@ -389,18 +430,12 @@ def script_tool(project_gdb=""):
director(project_gdb=project_gdb, Sequential=True, table_names=["GMEX_IDW", "HI_IDW", "WC_ANN_IDW", "WC_TRI_IDW"])
#director(project_gdb=project_gdb, Sequential=False, table_names=["SEUS_SPR_IDW", "HI_IDW"])
elif not Test:
- pass
- #director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GOA_IDW", "NBS_IDW",])
- #director(project_gdb=project_gdb, Sequential=False, table_names=["HI_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
- #director(project_gdb=project_gdb, Sequential=False, table_names=["GMEX_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW",])
- #director(project_gdb=project_gdb, Sequential=False, table_names=["SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW",])
- #director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW", "NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
- director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW",])
-
+ director(project_gdb=project_gdb, Sequential=False, table_names=["AI_IDW", "EBS_IDW", "ENBS_IDW", "GMEX_IDW", "GOA_IDW", "HI_IDW", "NBS_IDW",])
+ director(project_gdb=project_gdb, Sequential=False, table_names=["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW", "WC_ANN_IDW", "WC_TRI_IDW",])
else:
pass
del Test
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
sys.exit()
@@ -436,48 +471,46 @@ def script_tool(project_gdb=""):
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
- sys.exit()
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
if __name__ == '__main__':
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
else:
pass
+
script_tool(project_gdb)
+
arcpy.SetParameterAsText(1, "Result")
del project_gdb
- except:
- traceback.print_exc()
- else:
- pass
- finally:
- pass
\ No newline at end of file
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
\ No newline at end of file
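Throughout this diff the verbose chain of `except` handlers is collapsed into a single `trace()` helper that pulls the failing line and message out of the active traceback. A minimal, arcpy-free sketch of the same idea (it reports the real filename from the traceback frame rather than a hard-coded placeholder; the names are illustrative):

```python
import sys
import traceback

def trace():
    """Return (line, filename, error message) for the active exception."""
    tb = sys.exc_info()[2]
    # format_tb() yields strings like: '  File "x.py", line 12, in f'
    tbinfo = traceback.format_tb(tb)[0]
    line = tbinfo.split(", ")[1]                   # e.g. 'line 12'
    filename = tb.tb_frame.f_code.co_filename      # the file where the try: lives
    synerror = traceback.format_exc().splitlines()[-1]
    return line, filename, synerror

# exercise the helper with a deliberate failure
try:
    raise ValueError("bad input")
except ValueError:
    line, filename, err = trace()
```

Note that `format_tb(tb)[0]` describes the outermost frame, so for deeply nested calls the reported line is where the guarded block entered the call chain, not the innermost failure.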
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/create_species_year_image_name_table_worker.py b/ArcGIS-Analysis-Python/src/dismap_tools/create_species_year_image_name_table_worker.py
index 0628606..ac4f374 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/create_species_year_image_name_table_worker.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/create_species_year_image_name_table_worker.py
@@ -9,13 +9,22 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
#-------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
-
import inspect
import arcpy # third-parties second
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = tb.tb_frame.f_code.co_filename
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
def print_table(table=""):
try:
""" Print first 5 rows of a table """
@@ -30,17 +39,17 @@ def print_table(table=""):
del row
del desc, fields, oid
del table
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
sys.exit()
def worker(region_gdb=""):
try:
- # Test if passed workspace exists, if not sys.exit()
- if not arcpy.Exists(rf"{region_gdb}"):
- arcpy.AddError(f"{os.path.basename(region_gdb)} is missing!!")
- sys.exit()
+## # Test if passed workspace exists, if not sys.exit()
+## if not arcpy.Exists(rf"{region_gdb}"):
+## arcpy.AddError(f"{os.path.basename(region_gdb)} is missing!!")
+## sys.exit()
from arcpy import metadata as md
# Import the dismap module to access tools
@@ -91,8 +100,8 @@ def worker(region_gdb=""):
# **********************************************************************
# Start: Create new LayerSpeciesYearImageName table
- arcpy.AddMessage(f"\nDatasets Table\n" )
- datasets_table = rf"{region_gdb}\Datasets"
+ arcpy.AddMessage("\nDatasets Table\n" )
+ datasets_table = rf"{region_gdb}\Datasets"
datasets_table_fields = [f.name for f in arcpy.ListFields(datasets_table) if f.type not in ['Geometry', 'OID']]
print_table(datasets_table)
#region = [row[0] for row in arcpy.da.SearchCursor(datasets_table, "Region", where_clause = f"TableName = '{table_name}'")][0]
@@ -103,7 +112,7 @@ def worker(region_gdb=""):
# **********************************************************************
# Start: Create new LayerSpeciesYearImageName table
- arcpy.AddMessage(f"\nRegion IDW Table\n" )
+ arcpy.AddMessage("\nRegion IDW Table\n" )
region_table_fields = [f.name for f in arcpy.ListFields(region_table) if f.type not in ['Geometry', 'OID']]
# Get a record count to see if data is present; we don't want to add data
getcount = arcpy.management.GetCount(region_table)[0]
@@ -116,7 +125,7 @@ def worker(region_gdb=""):
# **********************************************************************
# Start: Create new LayerSpeciesYearImageName table
- arcpy.AddMessage(f"\nImage Name Table\n" )
+ arcpy.AddMessage("\nImage Name Table\n" )
layer_species_year_image_name_fields = [f.name for f in arcpy.ListFields(layer_species_year_image_name) if f.type not in ['Geometry', 'OID']]
arcpy.AddMessage(f"Image Name Fields:\n\t{', '.join(layer_species_year_image_name_fields)}")
print_table(layer_species_year_image_name)
@@ -126,7 +135,7 @@ def worker(region_gdb=""):
# **********************************************************************
# Start: Get information from the species filter table to create a
# species filter dictionary
- arcpy.AddMessage(f"\nCreating the Species_Filter dictionary\n" )
+ arcpy.AddMessage("\nCreating the Species_Filter dictionary\n" )
species_filter_table = os.path.join(region_gdb, "Species_Filter")
species_filter_table_fields = [f.name for f in arcpy.ListFields(species_filter_table) if f.type not in ['Geometry', 'OID']]
@@ -163,7 +172,7 @@ def worker(region_gdb=""):
# Image Name Table
# DatasetCode, Region, Season, SummaryProduct, FilterRegion, FilterSubRegion, Species, CommonName, SpeciesCommonName, CommonNameSpecies, TaxonomicGroup, ManagementBody, ManagementPlan, DistributionProjectName, CoreSpecies, Variable, Value, Dimensions, ImageName
- arcpy.AddMessage(f"\nDefining the case fields\n")
+ arcpy.AddMessage("\nDefining the case fields\n")
case_fields = [f for f in layer_species_year_image_name_fields if f in region_table_fields]
arcpy.AddMessage(f"Case Fields:\n\t{', '.join(case_fields)}")
@@ -420,183 +429,29 @@ def worker(region_gdb=""):
arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
- return True
- finally:
- pass
-
-def preprocessing(project_gdb="", table_names="", clear_folder=True):
- try:
- import dismap_tools
-
- arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
- arcpy.SetLogMetadata(True)
- arcpy.SetSeverityLevel(1) # 0—A tool will not throw an exception, even if the tool produces an error or warning.
- # 1—If a tool produces a warning or an error, it will throw an exception.
- # 2—If a tool produces an error, it will throw an exception. This is the default.
- arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
-
- # Set basic arcpy.env variables
- arcpy.env.overwriteOutput = True
- arcpy.env.parallelProcessingFactor = "100%"
-
- # Set varaibales
- project_folder = os.path.dirname(project_gdb)
- scratch_folder = rf"{project_folder}\Scratch"
- scratch_workspace = rf"{project_folder}\Scratch\scratch.gdb"
-
- # Clear Scratch Folder
- #ClearScratchFolder = True
- #if ClearScratchFolder:
- if clear_folder:
- dismap_tools.clear_folder(folder=scratch_folder)
- else:
- pass
- #del ClearScratchFolder
- del clear_folder
-
- arcpy.env.workspace = project_gdb
- arcpy.env.scratchWorkspace = scratch_workspace
- del project_folder, scratch_workspace
-
- if not table_names:
- table_names = [row[0] for row in arcpy.da.SearchCursor(f"{project_gdb}\Datasets",
- "TableName",
- where_clause = "TableName LIKE '%_IDW'")]
- else:
- pass
-
- for table_name in table_names:
- arcpy.AddMessage(f"Pre-Processing: {table_name}")
-
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
- region_scratch_workspace = rf"{scratch_folder}\{table_name}\scratch.gdb"
-
- # Create Scratch Workspace for Region
- if not arcpy.Exists(region_scratch_workspace):
- os.makedirs(rf"{scratch_folder}\{table_name}")
- if not arcpy.Exists(region_scratch_workspace):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}\{table_name}", f"scratch")
- del region_scratch_workspace
-
- arcpy.AddMessage(f"Creating File GDB: {table_name}")
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"{table_name}")
- arcpy.AddMessage("\tCreate File GDB: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-
- # Process: Make Table View (Make Table View) (management)
- datasets = rf'{project_gdb}\Datasets'
- arcpy.AddMessage(f"\t{os.path.basename(datasets)} has {arcpy.management.GetCount(datasets)[0]} records")
-
- table_name_view = "Dataset Table View"
- arcpy.management.MakeTableView(in_table = datasets,
- out_view = table_name_view,
- where_clause = f"TableName = '{table_name}'"
- )
- arcpy.AddMessage(f"\tThe table {table_name_view} has {arcpy.management.GetCount(table_name_view)[0]} records")
- arcpy.management.CopyRows(table_name_view, rf"{region_gdb}\Datasets")
- arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-
- filter_region = [row[0] for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", "FilterRegion")][0].replace("'", "''")
- filter_subregion = [row[0] for row in arcpy.da.SearchCursor(rf"{region_gdb}\Datasets", "FilterSubRegion")][0].replace("'", "''")
-
- arcpy.management.Delete(table_name_view)
- del table_name_view
-
- region_table = rf"{project_gdb}\{table_name}"
- arcpy.AddMessage(f"\t{os.path.basename(region_table)} has {arcpy.management.GetCount(region_table)[0]} records")
- # Process: Make Table View (Make Table View) (management)
- table_name_view = "IDW Table View"
- arcpy.management.MakeTableView(in_table = region_table,
- out_view = table_name_view,
- where_clause = "DistributionProjectName = 'NMFS/Rutgers IDW Interpolation'"
- )
- # Process: Copy Rows (Copy Rows) (management)
- arcpy.AddMessage(f"\t{table_name_view} has {arcpy.management.GetCount(table_name_view)[0]} records")
- arcpy.management.CopyRows(in_rows = table_name_view, out_table = rf"{region_gdb}\{table_name}")
- arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-
- arcpy.management.Delete(table_name_view)
- del table_name_view
-
- # Process: Make Table View (Make Table View) (management)
- #arcpy.AddMessage(filter_subregion)
- species_filter = rf"{project_gdb}\Species_Filter"
- arcpy.AddMessage(f"\t{os.path.basename(species_filter)} has {arcpy.management.GetCount(species_filter)[0]} records")
- table_name_view = "Species Filter Table View"
- arcpy.management.MakeTableView(in_table = species_filter,
- out_view = table_name_view,
- #where_clause = f"FilterSubRegion = '{filter_subregion}'",
- where_clause = f"FilterSubRegion = '{filter_subregion}' AND DistributionProjectName = 'NMFS/Rutgers IDW Interpolation'",
- workspace=region_gdb,
- field_info="OBJECTID OBJECTID VISIBLE NONE;Species Species VISIBLE NONE;CommonName CommonName VISIBLE NONE;TaxonomicGroup TaxonomicGroup VISIBLE NONE;FilterRegion FilterRegion VISIBLE NONE;FilterSubRegion FilterSubRegion VISIBLE NONE;ManagementBody ManagementBody VISIBLE NONE;ManagementPlan ManagementPlan VISIBLE NONE;DistributionProjectName DistributionProjectName VISIBLE NONE"
- )
-
- arcpy.AddMessage(f"\t{table_name_view} has {arcpy.management.GetCount(table_name_view)[0]} records")
- arcpy.management.CopyRows(in_rows = table_name_view, out_table = rf"{region_gdb}\Species_Filter")
- arcpy.AddMessage("\tCopy Rows: {0}\n".format(arcpy.GetMessages().replace("\n", '\n\t')))
-
- arcpy.management.Delete(table_name_view)
- del table_name_view
- #print(filter_region, filter_subregion)
- #
- del region_table, species_filter
- del datasets, filter_region, filter_subregion
- # Leave so we can block the above code
- # Declared Variables
- del table_name
-
- # Declared Variables
- del scratch_folder, region_gdb
- # Imports
- del dismap_tools
- # Function Parameters
- del project_gdb, table_names
-
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
- except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
+ except: # noqa: E722
arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
traceback.print_exc()
sys.exit()
- else:
+ else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
def script_tool(project_gdb=""):
try:
- import dismap_tools
+ from create_species_year_image_name_table_director import preprocessing
from time import gmtime, localtime, strftime, time
# Set a start time so that we can see how log things take
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -613,17 +468,17 @@ def script_tool(project_gdb=""):
## #table_name = "ENBS_IDW"
#table_names = ["NEUS_FAL_IDW", "NEUS_SPR_IDW", "SEUS_FAL_IDW", "SEUS_SPR_IDW", "SEUS_SUM_IDW",]
- table_names = ["NEUS_FAL_IDW"]
+ table_names = ["WC_ANN_IDW"]
preprocessing(project_gdb=project_gdb, table_names=table_names, clear_folder=True)
# Set variables
project_folder = os.path.dirname(project_gdb)
- scratch_folder = rf"{project_folder}\Scratch"
+ scratch_folder = os.path.join(project_folder, "Scratch")
del project_folder
for table_name in table_names:
- region_gdb = rf"{scratch_folder}\{table_name}.gdb"
+ region_gdb = os.path.join(scratch_folder, f"{table_name}.gdb")
try:
pass
@@ -640,9 +495,10 @@ def script_tool(project_gdb=""):
# Declared Variables
del scratch_folder
# Imports
- del dismap_tools
+
# Function Parameters
del project_gdb
+
# Elapsed time
end_time = time()
elapse_time = end_time - start_time
@@ -658,48 +514,46 @@ def script_tool(project_gdb=""):
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
if __name__ == '__main__':
try:
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
else:
pass
+
script_tool(project_gdb)
+
arcpy.SetParameterAsText(1, "Result")
+
del project_gdb
- except:
- traceback.print_exc()
- else:
- pass
- finally:
- pass
\ No newline at end of file
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
\ No newline at end of file
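Several functions in this diff keep (or previously kept) an `else:` clause that warns about any locals not explicitly `del`eted before the function returns, as a development-time leak check. The pattern can be sketched without arcpy; the function and variable names below are illustrative only:

```python
def leftover_locals():
    """Mimic the development-time 'Remaining Keys' check from the diff."""
    temp = 21
    result = temp * 2
    del temp  # explicitly released, so it is not reported
    # any local not starting with '__' that survives to this point is "remaining"
    rk = [key for key in locals().keys() if not key.startswith("__")]
    return result, rk

result, remaining = leftover_locals()
```

In the scripts themselves the list is printed with `arcpy.AddMessage` as a warning rather than returned; the check is meant to be moved or removed once testing is done.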
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/dev_dismap_tiff_image_archive.py b/ArcGIS-Analysis-Python/src/dismap_tools/dev_dismap_tiff_image_archive.py
new file mode 100644
index 0000000..1eb478c
--- /dev/null
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/dev_dismap_tiff_image_archive.py
@@ -0,0 +1,159 @@
+#---------------------------------------------------------------------------------------
+# Name:        dev_dismap_tiff_image_archive
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 22/12/2025
+# Copyright: (c) john.f.kennedy 2025
+# Licence:
+#---------------------------------------------------------------------------------------
+import zipfile
+import os
+import sys
+import traceback
+import inspect
+
+import arcpy
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = tb.tb_frame.f_code.co_filename
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def zip_folder(folder_path="", archive_folder=""):
+ """
+ Creates a zip archive of a given folder and its contents.
+
+ Args:
+ folder_path (str): The path to the folder to be archived.
+ """
+ try:
+ output_zip_path = os.path.join(archive_folder, f"{os.path.basename(folder_path)}.zip")
+ #arcpy.AddMessage(output_zip_path)
+ arcpy.AddMessage(f"\t\t\t\t../{'/'.join(output_zip_path.split(os.sep)[-3:])}")
+
+ with zipfile.ZipFile(output_zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
+ for root, dirs, files in os.walk(folder_path):
+ for file in files:
+ file_path = os.path.join(root, file)
+ # Calculate the relative path within the zip archive
+ arcname = os.path.relpath(file_path, folder_path)
+ #arcpy.AddMessage(arcname)
+ zipf.write(file_path, arcname)
+ for dir_name in dirs:
+ # Add empty directories to the archive
+ dir_path = os.path.join(root, dir_name)
+ arcname = os.path.relpath(dir_path, folder_path)
+ #arcpy.AddMessage(arcname)
+ # Ensure directory entries end with a slash in the archive
+ if not arcname.endswith('/'):
+ arcname += '/'
+ zipf.writestr(zipfile.ZipInfo(arcname), '')
+
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except:# noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def main(base_folder="", versions="", archive_folder=""):
+ try:
+ import dismap_tools
+ arcpy.AddMessage(f"Home Folder: {os.path.basename(base_folder)} in '{inspect.stack()[0][3]}'")
+ arcpy.AddMessage(f"Versions: {', '.join(versions)} in '{inspect.stack()[0][3]}'")
+
+ for version in versions:
+ image_folder = rf"{base_folder}\{version}\Images"
+ _archive_folder = os.path.join(archive_folder, f"DisMAP_{dismap_tools.date_code(version)}", "results\\raster")
+ #arcpy.AddMessage(f"\tImage Folder: {os.path.basename(image_folder)} in '{inspect.stack()[0][3]}'")
+ arcpy.AddMessage(f"\tImage Folder: {os.path.basename(image_folder)}")
+ arcpy.AddMessage(f"\t\tArchive Folder: {os.path.basename(_archive_folder)}")
+
+ for entry in os.scandir(image_folder):
+ if entry.is_dir():
+ arcpy.AddMessage(f"\t\t\tInput Folder: {os.path.basename(entry.path)}")
+ zip_folder(entry.path, _archive_folder)
+ else:
+ pass
+ del entry
+ del image_folder, version, _archive_folder
+
+ # Delete Declared Variables
+ # Delete Function Parameters
+ del base_folder, versions, archive_folder
+ # Imports
+ del dismap_tools
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except:# noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ base_folder = arcpy.GetParameterAsText(0)
+ versions = arcpy.GetParameterAsText(1)
+ if versions:
+     versions = versions.split(";")  # multivalue parameters arrive as a semicolon-delimited string
+ archive_folder = arcpy.GetParameterAsText(2)
+
+ if not base_folder:
+ base_folder = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python")
+ else:
+ arcpy.AddMessage(f"Home Folder: {os.path.basename(base_folder)}")
+
+ if not versions:
+ #versions = ["April 1 2023", "July 1 2024", "August 1 2025",]
+ #versions = ["April 1 2023"]
+ versions = ["February 1 2026"]
+ else:
+ arcpy.AddMessage(f"Versions: {', '.join(versions)}")
+
+ if not archive_folder:
+ archive_folder = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\NCEI Archive")
+ else:
+ arcpy.AddMessage(f"Archive Folder: {os.path.basename(archive_folder)}")
+
+ result = main(base_folder=base_folder, versions=versions, archive_folder=archive_folder)
+
+ if result:
+ arcpy.SetParameterAsText(3, result)
+ del result
+
+ # Clean-up declared variables
+ del base_folder, versions, archive_folder
+
+ except: # noqa: E722
+ traceback.print_exc()
+ else:
+ pass
+ finally:
+ sys.exit()
\ No newline at end of file
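The new `zip_folder` helpers walk a folder and write explicit directory entries (arcnames ending in `/`) so that empty subfolders survive the round trip, which plain `ZipFile.write` of files alone would drop. A self-contained sketch of the same approach, runnable outside ArcGIS against a throwaway folder:

```python
import os
import tempfile
import zipfile

def zip_folder(folder_path, archive_folder):
    """Zip folder_path, including empty subdirectories, into archive_folder."""
    output_zip_path = os.path.join(archive_folder, os.path.basename(folder_path) + ".zip")
    with zipfile.ZipFile(output_zip_path, "w", zipfile.ZIP_DEFLATED) as zipf:
        for root, dirs, files in os.walk(folder_path):
            for name in files:
                file_path = os.path.join(root, name)
                zipf.write(file_path, os.path.relpath(file_path, folder_path))
            for name in dirs:
                dir_path = os.path.join(root, name)
                arcname = os.path.relpath(dir_path, folder_path)
                # directory entries must end with '/' inside the archive
                zipf.writestr(zipfile.ZipInfo(arcname + "/"), "")
    return output_zip_path

# exercise it against a temporary folder with one file and one empty subfolder
base = tempfile.mkdtemp()
src = os.path.join(base, "Images")
os.makedirs(os.path.join(src, "empty_dir"))
with open(os.path.join(src, "a.txt"), "w") as fh:
    fh.write("data")
zip_path = zip_folder(src, base)
with zipfile.ZipFile(zip_path) as zf:
    names = zf.namelist()
```

Writing the zip into a sibling of the source folder (as above) keeps `os.walk` from picking up the archive itself mid-write.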
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/dev_dismap_vector_archive.py b/ArcGIS-Analysis-Python/src/dismap_tools/dev_dismap_vector_archive.py
new file mode 100644
index 0000000..9acf96b
--- /dev/null
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/dev_dismap_vector_archive.py
@@ -0,0 +1,237 @@
+#---------------------------------------------------------------------------------------
+# Name:        dev_dismap_vector_archive
+# Purpose:
+#
+# Author: john.f.kennedy
+#
+# Created: 22/12/2025
+# Copyright: (c) john.f.kennedy 2025
+# Licence:
+#---------------------------------------------------------------------------------------
+import zipfile
+import os
+import inspect
+
+import arcpy
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = tb.tb_frame.f_code.co_filename
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def dis_map_archive_folders(home_folder="", versions="", archive_folder=""):
+ try:
+ import dismap_tools
+
+ arcpy.env.overwriteOutput = True
+
+ #arcpy.AddMessage(f"Home Folder: {os.path.basename(home_folder)} in '{inspect.stack()[0][3]}'")
+ #arcpy.AddMessage(f"Versions: {', '.join(versions)} in '{inspect.stack()[0][3]}'")
+
+ for version in versions:
+ project_gdb = rf"{home_folder}\{version}\{version}.gdb"
+ _archive_folder = os.path.join(archive_folder, f"DisMAP_{dismap_tools.date_code(version)}")
+ #archive_gdb = rf"{_archive_folder}\DisMAP_{dismap_tools.date_code(version)}.gpkg"
+
+ archive_folders = ["initial", "results/vector-tabular/metadata", "results/raster"]
+
+ #arcpy.AddMessage(f"\tProject GDB: {os.path.basename(project_gdb)} in '{inspect.stack()[0][3]}'")
+ #arcpy.AddMessage(f"\tProject GDB: {project_gdb}")
+ #arcpy.AddMessage(f"\t\tArchive Folder: {_archive_folder}")
+ for archiveFolder in archive_folders:
+ archiveFolder_path = os.path.abspath(os.path.join(archive_folder, f"DisMAP_{dismap_tools.date_code(version)}", archiveFolder))
+ #arcpy.AddMessage(f"\t\t\tFolder: {archiveFolder_path}")
+ if not os.path.isdir(archiveFolder_path):
+ os.makedirs(archiveFolder_path)
+ else:
+ pass
+ pass
+
+ del archiveFolder_path, archiveFolder
+ del archive_folders
+
+ del project_gdb, _archive_folder
+ del version
+
+ # Delete Declared Variables
+ # Delete Function Parameters
+ del home_folder, versions, archive_folder
+ # Imports
+ del dismap_tools
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except:# noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+def zip_folder(folder_path="", archive_folder=""):
+ """
+ Creates a zip archive of a given folder and its contents.
+
+ Args:
+ folder_path (str): The path to the folder to be archived.
+ archive_folder (str): The folder in which the zip archive is written.
+ """
+ output_zip_path = rf"{archive_folder}\{os.path.basename(folder_path)}.zip"
+
+ with zipfile.ZipFile(output_zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
+ for root, dirs, files in os.walk(folder_path):
+ for file in files:
+ file_path = os.path.join(root, file)
+ # Calculate the relative path within the zip archive
+ arcname = os.path.relpath(file_path, folder_path)
+ #arcpy.AddMessage(arcname)
+ zipf.write(file_path, arcname)
+ for dir_name in dirs:
+ # Add directory entries (including empty directories) to the archive
+ dir_path = os.path.join(root, dir_name)
+ arcname = os.path.relpath(dir_path, folder_path)
+ #arcpy.AddMessage(arcname)
+ # Ensure directory entries end with a slash in the archive
+ if not arcname.endswith('/'):
+ arcname += '/'
+ zipf.writestr(zipfile.ZipInfo(arcname), '')
+
+
+def main(home_folder="", versions="", archive_folder=""):
+ try:
+ import dismap_tools
+ from arcpy import metadata as md
+
+ arcpy.env.overwriteOutput = True
+
+ dis_map_archive_folders(home_folder=home_folder, versions=versions, archive_folder=archive_folder)
+
+ #archive_folders = ["initial", "results/vector-tabular/metadata", "results/raster"]
+ #archiveFolder_path = os.path.abspath(os.path.join(archive_folder, f"DisMAP_{dismap_tools.date_code(version)}", archiveFolder))
+
+ arcpy.AddMessage(f"Home Folder: {os.path.basename(home_folder)} in '{inspect.stack()[0][3]}'")
+ arcpy.AddMessage(f"Versions: {', '.join(versions)} in '{inspect.stack()[0][3]}'")
+
+ for version in versions:
+ project_gdb = rf"{home_folder}\{version}\{version}.gdb"
+ _archive_folder = os.path.abspath(os.path.join(archive_folder, f"DisMAP_{dismap_tools.date_code(version)}", "results/vector-tabular"))
+ archive_gdb = os.path.abspath(rf"{_archive_folder}\DisMAP_{dismap_tools.date_code(version)}.gpkg")
+
+ #arcpy.AddMessage(f"\tProject GDB: {os.path.basename(project_gdb)} in '{inspect.stack()[0][3]}'")
+ arcpy.AddMessage(f"\tProject GDB: {os.path.basename(project_gdb)}")
+ arcpy.AddMessage(f"\t\tArchive Folder: {os.path.basename(_archive_folder)}")
+
+ arcpy.env.workspace = project_gdb
+
+ arcpy.management.CreateSQLiteDatabase(out_database_name=archive_gdb, spatial_type="GEOPACKAGE")
+
+ archive_tbs = [
+ "DisMAP_Survey_Info",
+ "Indicators",
+ "SpeciesPersistenceIndicatorPercentileBin",
+ "SpeciesPersistenceIndicatorTrend",
+ "Species_Filter",
+ ]
+ # fc_md.exportMetadata("C:\\Users\\john.f.kennedy\\Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\NCEI Archive\fc_md.xml", "ISO19139_GML32", 'REMOVE_ALL_SENSITIVE_INFO')
+ for tb in sorted([tb for tb in arcpy.ListTables("*") if tb in archive_tbs or tb.endswith("_IDW")]):
+ arcpy.AddMessage(f"\t\t\tTable: {tb}")
+ arcpy.management.Copy(rf"{project_gdb}\{tb}", rf"{archive_gdb}\{tb}")
+ tb_md = md.Metadata(rf"{project_gdb}\{tb}")
+ tb_md.exportMetadata(os.path.abspath(os.path.join(archive_folder, f"DisMAP_{dismap_tools.date_code(version)}", f"results/vector-tabular/metadata/{tb}.xml")), "ISO19139", "REMOVE_ALL_SENSITIVE_INFO")
+ del tb_md
+
+ del tb
+ del archive_tbs
+
+ archive_fcs = [
+ "Regions",
+ "Sample_Locations",
+ ]
+
+ for fc in sorted([fc for fc in arcpy.ListFeatureClasses("*") if any(fc.endswith(f) for f in archive_fcs)]):
+ arcpy.AddMessage(f"\t\t\tFeature Class: {fc}")
+ arcpy.management.Copy(rf"{project_gdb}\{fc}", rf"{archive_gdb}\{fc}")
+ fc_md = md.Metadata(rf"{project_gdb}\{fc}")
+ fc_md.exportMetadata(os.path.abspath(os.path.join(archive_folder, f"DisMAP_{dismap_tools.date_code(version)}", f"results/vector-tabular/metadata/{fc}.xml")), "ISO19139", "REMOVE_ALL_SENSITIVE_INFO")
+ del fc_md
+
+ del fc
+
+ del project_gdb, _archive_folder, archive_gdb
+ del version
+
+ # Delete Declared Variables
+ # Delete Function Parameters
+ del home_folder, versions, archive_folder
+ # Imports
+ del dismap_tools, md
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except:# noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
+ else:
+ return True
+
+if __name__ == '__main__':
+ try:
+ home_folder = arcpy.GetParameterAsText(0)
+ versions = arcpy.GetParameterAsText(1)
+ archive_folder = arcpy.GetParameterAsText(2)
+
+ if not home_folder:
+ home_folder = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python")
+ else:
+ arcpy.AddMessage(f"Home Folder: {os.path.basename(home_folder)}")
+
+ if not versions:
+ versions = ["February 1 2026"]
+ #versions = ["April 1 2023", "July 1 2024", "August 1 2025", "February 1 2026"]
+ #versions = ["July 1 2024", "August 1 2025",]
+ else:
+ # GetParameterAsText returns a single semicolon-delimited string
+ versions = versions.split(";")
+ arcpy.AddMessage(f"Versions: {', '.join(versions)}")
+
+ if not archive_folder:
+ archive_folder = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\NCEI Archive")
+ else:
+ arcpy.AddMessage(f"Archive Folder: {os.path.basename(archive_folder)}")
+
+ result = main(home_folder=home_folder, versions=versions, archive_folder=archive_folder)
+
+ if result:
+ arcpy.SetParameterAsText(3, result)
+ else:
+ pass
+ del result
+
+ # Clean-up declared variables
+ del home_folder, versions, archive_folder
+
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ else:
+ pass
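For quick verification outside ArcGIS Pro, the archive-name logic in `zip_folder` above can be exercised with only the standard library. This is a minimal sketch, not the production tool: the temporary `results/metadata` layout is invented for illustration, and directory separators are normalized to `/` as ZIP entry names expect.

```python
import os
import tempfile
import zipfile

def zip_folder(folder_path, archive_folder):
    """Zip folder_path (files plus empty directories) into archive_folder."""
    output_zip_path = os.path.join(archive_folder, os.path.basename(folder_path) + ".zip")
    with zipfile.ZipFile(output_zip_path, "w", zipfile.ZIP_DEFLATED) as zipf:
        for root, dirs, files in os.walk(folder_path):
            for file in files:
                file_path = os.path.join(root, file)
                # entry name is relative to the folder being archived
                zipf.write(file_path, os.path.relpath(file_path, folder_path))
            for dir_name in dirs:
                # ZIP directory entries use '/' separators and a trailing slash
                arcname = os.path.relpath(os.path.join(root, dir_name), folder_path)
                zipf.writestr(zipfile.ZipInfo(arcname.replace(os.sep, "/") + "/"), "")
    return output_zip_path

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "results")
    os.makedirs(os.path.join(src, "metadata"))  # empty subfolder survives in the zip
    with open(os.path.join(src, "table.csv"), "w") as f:
        f.write("a,b\n1,2\n")
    zip_path = zip_folder(src, tmp)
    with zipfile.ZipFile(zip_path) as zf:
        names = sorted(zf.namelist())

print(names)  # ['metadata/', 'table.csv']
```

The trailing-slash entry is what makes an empty folder reappear on extraction; without it, unpacking the archive would silently drop `metadata/`.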
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/dismap_base_project_setup.py b/ArcGIS-Analysis-Python/src/dismap_tools/dismap_base_project_setup.py
index ee5a8df..d0dce99 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/dismap_base_project_setup.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/dismap_base_project_setup.py
@@ -30,7 +30,7 @@ def script_tool(base_project_folder="", base_project_folders=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- #arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ #arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"{'-' * 80}\n")
@@ -97,7 +97,7 @@ def script_tool(base_project_folder="", base_project_folders=""):
pass
if not base_project_folders:
- base_project_folders = "Bathymetry;Initial Data"
+ base_project_folders = "Bathymetry;Dataset Shapefiles;Initial Data"
else:
pass
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/dismap_project_setup.py b/ArcGIS-Analysis-Python/src/dismap_tools/dismap_project_setup.py
index 05fefea..787bf7d 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/dismap_project_setup.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/dismap_project_setup.py
@@ -9,6 +9,15 @@
import arcpy
import traceback
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = __file__
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
def script_tool(new_project_folder, project_folders):
"""Script code goes below"""
try:
@@ -28,18 +37,21 @@ def script_tool(new_project_folder, project_folders):
arcpy.AddMessage(arcpy.GetMessages())
else:
arcpy.AddMessage(f"Project GDB: {new_project_folder}.gdb exists")
+
if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\Scratch"):
arcpy.AddMessage("Creating the Scratch Folder")
arcpy.management.CreateFolder(rf"{home_folder}\{new_project_folder}", "Scratch")
arcpy.AddMessage(arcpy.GetMessages())
else:
arcpy.AddMessage(f"Scratch Folder: {new_project_folder} exists")
+
if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\Scratch\scratch.gdb"):
arcpy.AddMessage("Creating the Scratch GDB")
arcpy.management.CreateFileGDB(rf"{home_folder}\{new_project_folder}\Scratch", "scratch")
arcpy.AddMessage(arcpy.GetMessages())
else:
arcpy.AddMessage("Scratch GDB Exists")
+
for _project_folder in project_folders.split(";"):
if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\{_project_folder}"):
arcpy.AddMessage(f"Creating Folder: {_project_folder}")
@@ -98,7 +110,7 @@ def script_tool(new_project_folder, project_folders):
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
#raise SystemExit
 except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
#raise SystemExit
@@ -128,9 +140,7 @@ def script_tool(new_project_folder, project_folders):
del new_project_folder, project_folders
except: # noqa: E722
- arcpy.AddMessage(arcpy.GetMessages(0))
- traceback.print_exc()
- else:
- pass
- finally:
- pass
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
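Both setup scripts receive `project_folders` from the tool dialog as one semicolon-delimited string and split it before creating each folder. Below is a standalone sketch of that parsing with the empty-entry handling made explicit; the default list mirrors `dismap_base_project_setup.py`, and the helper name is invented for illustration.

```python
def parse_folder_list(project_folders, default="Bathymetry;Dataset Shapefiles;Initial Data"):
    """Split a semicolon-delimited folder parameter, falling back to a default."""
    raw = project_folders or default
    # strip stray whitespace and drop empty entries from doubled/trailing semicolons
    return [name.strip() for name in raw.split(";") if name.strip()]

print(parse_folder_list(""))
# ['Bathymetry', 'Dataset Shapefiles', 'Initial Data']
print(parse_folder_list("CRFs; CSV_Data;;Layers"))
# ['CRFs', 'CSV_Data', 'Layers']
```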
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/dismap_tools.py b/ArcGIS-Analysis-Python/src/dismap_tools/dismap_tools.py
index 12f6ed6..bbc4f7e 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/dismap_tools.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/dismap_tools.py
@@ -13,11 +13,19 @@
import os
import sys
import traceback
-#import importlib
import inspect
import arcpy # third-parties second
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = __file__
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
def parse_xml_file_format_and_save(csv_data_folder="", xml_file="", sort=False):
try:
@@ -152,7 +160,7 @@ def add_fields(csv_data_folder="", in_table=""):
arcpy.env.overwriteOutput = True
arcpy.env.parallelProcessingFactor = "100%"
arcpy.env.workspace = project_gdb
- arcpy.env.scratchWorkspace = rf"Scratch\scratch.gdb"
+ arcpy.env.scratchWorkspace = r"Scratch\scratch.gdb"
arcpy.SetLogMetadata(True)
if "_IDW_Region" in table:
@@ -341,7 +349,7 @@ def basic_metadata(csv_data_folder="", in_table=""):
# set workspace environment
arcpy.env.overwriteOutput = True
arcpy.env.parallelProcessingFactor = "100%"
- arcpy.env.scratchWorkspace = rf"Scratch\scratch.gdb"
+ arcpy.env.scratchWorkspace = r"Scratch\scratch.gdb"
arcpy.env.workspace = project_gdb
arcpy.SetLogMetadata(True)
@@ -664,31 +672,24 @@ def clear_folder(folder=""):
# Function Parameter
del folder
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(arcpy.GetMessages(1))
- traceback.print_exc()
- sys.exit()
except arcpy.ExecuteError:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except Exception:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
except: # noqa: E722
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
- sys.exit()
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
def compare_metadata_xml(file1="", file2=""):
"""This requires the use of the clone ArcGIS Pro env and the installation of xmldiff."""
@@ -882,7 +883,7 @@ def dataset_title_dict(project_gdb=""):
__datasets_dict = {}
- dataset_codes = {row[0] : [row[1], row[2], row[3], row[4]] for row in arcpy.da.SearchCursor(rf"{project_gdb}\Datasets", ["DatasetCode", "PointFeatureType", "DistributionProjectCode", "Region", "Season"])}
+ dataset_codes = {row[0] : [row[1], row[2], row[3], row[4]] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"), ["DatasetCode", "PointFeatureType", "DistributionProjectCode", "Region", "Season"])}
#for dataset_code in dataset_codes:
# dataset_codes[dataset_code] = [s for s in dataset_codes[dataset_code] if s.strip()]
# #print(f"Dataset Code: {dataset_code}\n\t{dataset_codes[dataset_code]}")
@@ -1682,7 +1683,7 @@ def export_metadata(csv_data_folder="", in_table=""):
# Use all of the cores on the machine.
arcpy.env.parallelProcessingFactor = "100%"
# Set the scratch workspace
- arcpy.env.scratchWorkspace = rf"Scratch\scratch.gdb"
+ arcpy.env.scratchWorkspace = r"Scratch\scratch.gdb"
# Set the workspace to the workspace
arcpy.env.workspace = ws
@@ -1853,6 +1854,8 @@ def get_transformation(gsr_wkt="", psr_wkt=""):
def import_metadata(csv_data_folder="", dataset=""):
try:
+ #arcpy.AddMessage(csv_data_folder)
+ #arcpy.AddMessage(dataset)
if len(csv_data_folder) == 0 or len(dataset) == 0:
arcpy.AddError(f"{os.path.basename(csv_data_folder)} or {os.path.basename(dataset)} is empty")
raise SystemExit
@@ -1901,11 +1904,11 @@ def import_metadata(csv_data_folder="", dataset=""):
arcpy.env.overwriteOutput = True
arcpy.env.parallelProcessingFactor = "100%"
arcpy.env.workspace = project_gdb
- arcpy.env.scratchWorkspace = rf"Scratch\scratch.gdb"
+ arcpy.env.scratchWorkspace = os.path.join(os.path.dirname(project_gdb), "Scratch\\scratch.gdb")
arcpy.SetLogMetadata(True)
try:
- arcpy.AddMessage(f"Create Metadata Dictionary")
+ arcpy.AddMessage("Create Metadata Dictionary")
metadata_dictionary = dataset_title_dict(project_gdb)
if metadata_dictionary:
pass
@@ -1991,7 +1994,9 @@ def import_metadata(csv_data_folder="", dataset=""):
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -2402,7 +2407,7 @@ def test_bed_1(project_gdb=""):
##
## arcpy.env.overwriteOutput = True
## arcpy.env.parallelProcessingFactor = "100%"
- ## arcpy.env.scratchWorkspace = rf"Scratch\scratch.gdb"
+ ## arcpy.env.scratchWorkspace = r"Scratch\scratch.gdb"
## arcpy.env.workspace = gdb
## arcpy.SetLogMetadata(True)
##
@@ -2962,12 +2967,12 @@ def test_bed_2(project=""):
## arcpy.env.overwriteOutput = True
## arcpy.env.parallelProcessingFactor = "100%"
## arcpy.env.workspace = project_gdb
-## arcpy.env.scratchWorkspace = rf"Scratch\scratch.gdb"
+## arcpy.env.scratchWorkspace = r"Scratch\scratch.gdb"
## arcpy.SetLogMetadata(True)
##
## metadata_dictionary = dataset_title_dict(project_gdb)
##
-## dataset = rf"{project_gdb}\DisMAP_Regions"
+## dataset = os.path.join(project_gdb, "DisMAP_Regions")
## table = os.path.basename(dataset)
##
## # #arcpy.conversion.FeaturesToJSON(
@@ -3193,7 +3198,7 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"{'-' * 80}\n")
@@ -3233,7 +3238,7 @@ def script_tool(project_gdb=""):
from dev_create_table_definitions_json import get_list_of_table_fields
get_list_of_table_fields(project_gdb)
del get_list_of_table_fields
- csv_data_folder = rf"{project_folder}\CSV_Data"
+ csv_data_folder = os.path.join(project_folder, "CSV_Data")
# First Test
_table_definitions = table_definitions(csv_data_folder, "HI_IDW")
arcpy.AddMessage(_table_definitions)
@@ -3266,7 +3271,7 @@ def script_tool(project_gdb=""):
TestImportMetadata = True
if TestImportMetadata:
- csv_data_folder = rf"{project_folder}\CSV_Data"
+ csv_data_folder = os.path.join(project_folder, "CSV_Data")
#table_name = "Datasets"
#table_name = "Species_Filter"
#table_name = "DisMAP_Survey_Info"
@@ -3332,7 +3337,7 @@ def script_tool(project_gdb=""):
project_gdb = arcpy.GetParameterAsText(0)
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents\\ArcGIS\\Projects\\DisMAP\\ArcGIS-Analysis-Python\\February 1 2026\\February 1 2026.gdb")
else:
pass
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/dismap_version_project_setup.py b/ArcGIS-Analysis-Python/src/dismap_tools/dismap_version_project_setup.py
new file mode 100644
index 0000000..505c732
--- /dev/null
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/dismap_version_project_setup.py
@@ -0,0 +1,154 @@
+"""
+Script documentation
+- Tool parameters are accessed using arcpy.GetParameter() or
+ arcpy.GetParameterAsText()
+- Update derived parameter values using arcpy.SetParameter() or
+ arcpy.SetParameterAsText()
+"""
+import os
+import arcpy
+import traceback
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = __file__
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
+def script_tool(base_project_folder="", new_project_folder="", project_folders=""):
+ """Script code goes below"""
+ try:
+ arcpy.env.overwriteOutput = True
+ try:
+ aprx = arcpy.mp.ArcGISProject("CURRENT")
+ except: # noqa: E722
+ aprx = arcpy.mp.ArcGISProject(rf"{base_project_folder}\DisMAP.aprx")
+
+ aprx.save()
+ home_folder = aprx.homeFolder
+ if not arcpy.Exists(rf"{home_folder}\{new_project_folder}"):
+ arcpy.AddMessage(f"Creating Project Folder: '{new_project_folder}'")
+ arcpy.management.CreateFolder(home_folder, new_project_folder)
+ arcpy.AddMessage(arcpy.GetMessages())
+ else:
+ arcpy.AddMessage(f"Project Folder: '{new_project_folder}' Exists")
+ if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\{new_project_folder}.gdb"):
+ arcpy.AddMessage(f"Creating Project GDB: '{new_project_folder}.gdb'")
+ arcpy.management.CreateFileGDB(rf"{home_folder}\{new_project_folder}", f"{new_project_folder}")
+ arcpy.AddMessage(arcpy.GetMessages())
+ else:
+ arcpy.AddMessage(f"Project GDB: {new_project_folder}.gdb exists")
+ if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\Scratch"):
+ arcpy.AddMessage("Creating the Scratch Folder")
+ arcpy.management.CreateFolder(rf"{home_folder}\{new_project_folder}", "Scratch")
+ arcpy.AddMessage(arcpy.GetMessages())
+ else:
+ arcpy.AddMessage(f"Scratch Folder: {new_project_folder} exists")
+ if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\Scratch\scratch.gdb"):
+ arcpy.AddMessage("Creating the Scratch GDB")
+ arcpy.management.CreateFileGDB(rf"{home_folder}\{new_project_folder}\Scratch", "scratch")
+ arcpy.AddMessage(arcpy.GetMessages())
+ else:
+ arcpy.AddMessage("Scratch GDB Exists")
+ for _project_folder in project_folders.split(";"):
+ if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\{_project_folder}"):
+ arcpy.AddMessage(f"Creating Folder: {_project_folder}")
+ arcpy.management.CreateFolder(rf"{home_folder}\{new_project_folder}", _project_folder)
+ arcpy.AddMessage(arcpy.GetMessages())
+ else:
+ arcpy.AddMessage(f"Folder: '{_project_folder}' Exists")
+ del _project_folder
+ if not arcpy.Exists(rf"{home_folder}\{new_project_folder}\{new_project_folder}.aprx"):
+ aprx.saveACopy(rf"{home_folder}\{new_project_folder}\{new_project_folder}.aprx")
+ arcpy.AddMessage(arcpy.GetMessages())
+ else:
+ pass
+
+ _aprx = arcpy.mp.ArcGISProject(rf"{home_folder}\{new_project_folder}\{new_project_folder}.aprx")
+ # Remove maps
+ _maps = _aprx.listMaps()
+ if len(_maps) > 0:
+ for _map in _maps:
+ arcpy.AddMessage(_map.name)
+ _aprx.deleteItem(_map)
+ del _map
+ del _maps
+ _aprx.save()
+
+ databases = []
+ databases.append({"databasePath": rf"{home_folder}\{new_project_folder}\{new_project_folder}.gdb", "isDefaultDatabase": True})
+ _aprx.updateDatabases(databases)
+ arcpy.AddMessage(f"Databases: {databases}")
+ del databases
+ _aprx.save()
+
+ toolboxes = []
+ toolboxes.append({"toolboxPath": rf"{home_folder}\DisMAP.atbx", "isDefaultToolbox": True})
+ _aprx.updateToolboxes(toolboxes)
+ arcpy.AddMessage(f"Toolboxes: {toolboxes}")
+ del toolboxes
+ _aprx.save()
+ del _aprx
+
+ # Declared variables
+ del home_folder, aprx
+ # Function parameters
+ del base_project_folder, new_project_folder, project_folders
+ except arcpy.ExecuteWarning:
+ arcpy.AddWarning(arcpy.GetMessages(1))
+ except arcpy.ExecuteError:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ #raise SystemExit
+ except SystemExit:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ #raise SystemExit
+ except Exception:
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ #raise SystemExit
+ except: # noqa: E722
+ arcpy.AddError(arcpy.GetMessages(2))
+ traceback.print_exc()
+ #raise SystemExit
+ else:
+ pass
+ return True
+ finally:
+ pass
+if __name__ == "__main__":
+ try:
+ base_project_folder = arcpy.GetParameterAsText(0)
+ new_project_folder = arcpy.GetParameterAsText(1)
+ project_folders = arcpy.GetParameterAsText(2)
+
+ if not base_project_folder:
+ base_project_folder = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python"
+ else:
+ pass
+
+ if not new_project_folder:
+ new_project_folder = "February 1 2026"
+ else:
+ pass
+
+ if not project_folders:
+ project_folders = "CRFs;CSV_Data;Dataset_Shapefiles;Images;Layers;Metadata_Export;Publish"
+ else:
+ pass
+
+ script_tool(base_project_folder, new_project_folder, project_folders)
+
+ arcpy.SetParameterAsText(3, "Result")
+
+ del base_project_folder, new_project_folder, project_folders
+
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
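The `trace()` helper repeated across these modules follows Esri's sample error-reporting pattern: pull the failing line from the first traceback frame and the final message from `format_exc()`. Below is a standalone sketch (no arcpy required; the raised exception is contrived) that recovers the real filename from the traceback frame rather than hard-coding one:

```python
import sys
import traceback

def trace():
    """Return (line, filename, error message) for the exception being handled."""
    tb = sys.exc_info()[2]
    # format_tb entries look like: '  File "x.py", line 12, in func\n    ...'
    tbinfo = traceback.format_tb(tb)[0]
    line = tbinfo.split(", ")[1]               # e.g. 'line 12'
    filename = tb.tb_frame.f_code.co_filename  # the real file, taken from the frame
    synerror = traceback.format_exc().splitlines()[-1]
    return line, filename, synerror

try:
    raise ValueError("boom")
except ValueError:
    line, filename, err = trace()

print(line.startswith("line"), err)  # True ValueError: boom
```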
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/import_datasets_species_filter_csv_data.py b/ArcGIS-Analysis-Python/src/dismap_tools/import_datasets_species_filter_csv_data.py
index f4ec365..cf7287a 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/import_datasets_species_filter_csv_data.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/import_datasets_species_filter_csv_data.py
@@ -11,6 +11,15 @@
import inspect
import arcpy
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = __file__
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
def get_encoding_index_col(csv_file):
try:
# Imports
@@ -31,7 +40,9 @@ def get_encoding_index_col(csv_file):
# Read the CSV file into a DataFrame
df = pd.read_csv(csv_file, encoding = __encoding, delimiter = ",",)
# Analyze the data types and lengths
- for column in df.columns: dtypes[column] = df[column].dtype; del column
+ for column in df.columns:
+ dtypes[column] = df[column].dtype
+ del column
first_column = list(dtypes.keys())[0]
__index_column = 0 if first_column == "Unnamed: 0" else None
# Declared Variables
@@ -47,15 +58,15 @@ def get_encoding_index_col(csv_file):
traceback.print_exc()
arcpy.AddError(arcpy.GetMessages(2))
raise SystemExit
- except:
+ except: # noqa: E722
traceback.print_exc()
arcpy.AddError(arcpy.GetMessages(2))
raise SystemExit
else:
return __encoding, __index_column
finally:
- if "__encoding" in locals().keys(): del __encoding
- if "__index_column" in locals().keys(): del __index_column
+ pass
+
def worker(project_gdb="", csv_file=""):
try:
# Test if passed workspace exists, if not raise SystemExit
@@ -75,10 +86,10 @@ def worker(project_gdb="", csv_file=""):
table_name = os.path.basename(csv_file).replace(".csv", "")
csv_data_folder = os.path.dirname(csv_file)
project_folder = os.path.dirname(csv_data_folder)
- scratch_workspace = rf"{project_folder}\Scratch\scratch.gdb"
+ scratch_workspace = os.path.join(project_folder, "Scratch\\scratch.gdb")
# Set basic workkpace variables
arcpy.env.workspace = project_gdb
 arcpy.env.scratchWorkspace = r"Scratch\scratch.gdb"
arcpy.env.overwriteOutput = True
arcpy.env.parallelProcessingFactor = "100%"
#arcpy.AddMessage(table_name)
@@ -134,7 +145,7 @@ def worker(project_gdb="", csv_file=""):
arcpy.AddMessage(f">-> Creating the {table_name} Geodatabase Table")
try:
array = np.array(np.rec.fromrecords(df.values), dtype = field_gdb_dtypes)
- except:
+ except: # noqa: E722
traceback.print_exc()
raise SystemExit
del df
@@ -145,7 +156,7 @@ def worker(project_gdb="", csv_file=""):
arcpy.da.NumPyArrayToTable(array, tmp_table)
del array
# Captures ArcPy type of error
- except:
+ except: # noqa: E722
traceback.print_exc()
raise SystemExit
arcpy.AddMessage(f">-> Copying the {table_name} Table from memory to the GDB")
@@ -195,14 +206,16 @@ def worker(project_gdb="", csv_file=""):
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
raise SystemExit
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
raise SystemExit
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -272,14 +285,16 @@ def update_datecode(csv_file="", project_name=""):
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
raise SystemExit
- except:
+ except: # noqa: E722
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
raise SystemExit
else:
# While in development, leave here. For test, move to finally
rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
+ if rk:
+ arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##")
+ del rk
return True
finally:
pass
@@ -289,12 +304,13 @@ def script_tool(project_folder=""):
from lxml import etree
from arcpy import metadata as md
from io import StringIO
+ import dismap_tools
from time import gmtime, localtime, strftime, time
# Set a start time so that we can see how log things take
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: .. {'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"{'-' * 80}\n")
@@ -313,11 +329,11 @@ def script_tool(project_folder=""):
survey_metadata_csv = rf"{csv_data_folder}\DisMAP_Survey_Info.csv"
SpeciesPersistenceIndicatorTrend = rf"{csv_data_folder}\SpeciesPersistenceIndicatorTrend.csv"
SpeciesPersistenceIndicatorPercentileBin = rf"{csv_data_folder}\SpeciesPersistenceIndicatorPercentileBin.csv"
- arcpy.management.Copy(rf"{home_folder}\Datasets\Datasets_20250801.csv", datasets_csv)
- arcpy.management.Copy(rf"{home_folder}\Datasets\Species_Filter_20250801.csv", species_filter_csv)
- arcpy.management.Copy(rf"{home_folder}\Datasets\DisMAP_Survey_Info_20250801.csv", survey_metadata_csv)
- arcpy.management.Copy(rf"{home_folder}\Datasets\SpeciesPersistenceIndicatorTrend_20250801.csv", SpeciesPersistenceIndicatorTrend)
- arcpy.management.Copy(rf"{home_folder}\Datasets\SpeciesPersistenceIndicatorPercentileBin_20250801.csv", SpeciesPersistenceIndicatorPercentileBin)
+ arcpy.management.Copy(rf"{home_folder}\Initial Data\Datasets_{dismap_tools.date_code(project_name)}.csv", datasets_csv)
+ arcpy.management.Copy(rf"{home_folder}\Initial Data\Species_Filter_{dismap_tools.date_code(project_name)}.csv", species_filter_csv)
+ arcpy.management.Copy(rf"{home_folder}\Initial Data\DisMAP_Survey_Info_{dismap_tools.date_code(project_name)}.csv", survey_metadata_csv)
+ arcpy.management.Copy(rf"{home_folder}\Initial Data\SpeciesPersistenceIndicatorTrend_{dismap_tools.date_code(project_name)}.csv", SpeciesPersistenceIndicatorTrend)
+ arcpy.management.Copy(rf"{home_folder}\Initial Data\SpeciesPersistenceIndicatorPercentileBin_{dismap_tools.date_code(project_name)}.csv", SpeciesPersistenceIndicatorPercentileBin)
import json
json_path = rf"{csv_data_folder}\root_dict.json"
with open(json_path, "r") as json_file:
@@ -325,7 +341,7 @@ def script_tool(project_folder=""):
del json_file
del json_path
del json
- contacts = rf"{home_folder}\Datasets\DisMAP Contacts 2025 08 01.xml"
+ contacts = rf"{home_folder}\Datasets\DisMAP Contacts 2026 02 01.xml"
datasets = [datasets_csv, species_filter_csv, survey_metadata_csv, SpeciesPersistenceIndicatorTrend, SpeciesPersistenceIndicatorPercentileBin]
for dataset in datasets:
arcpy.AddMessage(rf"Metadata for: {os.path.basename(dataset)}")
@@ -340,7 +356,7 @@ def script_tool(project_folder=""):
dataset_md.save()
target_tree = etree.parse(StringIO(dataset_md.xml), parser=etree.XMLParser(encoding='UTF-8', remove_blank_text=True))
target_root = target_tree.getroot()
- target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag])
+ target_root[:] = sorted(target_root, key=lambda x: root_dict[x.tag]) # noqa: F821
new_item_name = target_root.find("Esri/DataProperties/itemProps/itemName").text
#arcpy.AddMessage(new_item_name)
etree.indent(target_root, space=' ')
@@ -352,7 +368,7 @@ def script_tool(project_folder=""):
del dataset_md
del dataset
del datasets
- del project_folder, csv_data_folder
+ del csv_data_folder
#
UpdateDatecode = True
if UpdateDatecode:
@@ -393,9 +409,9 @@ def script_tool(project_folder=""):
# Declared Variables
del contacts, target_tree, target_root, new_item_name, root_dict
# Imports
- del etree, md, StringIO
+ del etree, md, StringIO, dismap_tools
# Function Parameters
- del project_gdb
+ del project_folder
# Elapsed time
end_time = time()
elapse_time = end_time - start_time
@@ -405,52 +421,48 @@ def script_tool(project_folder=""):
arcpy.AddMessage(f"{'-' * 80}")
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except KeyboardInterrupt:
- raise SystemExit
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(arcpy.GetMessages(1))
+
except arcpy.ExecuteError:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- raise SystemExit
- except SystemExit:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- raise SystemExit
- except Exception:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- raise SystemExit
- except:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- raise SystemExit
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
+
if __name__ == '__main__':
try:
- project_gdb = arcpy.GetParameterAsText(0)
- if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025"
+ project_folder = arcpy.GetParameterAsText(0)
+ if not project_folder:
+ project_folder = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026")
else:
pass
- script_tool(project_gdb)
+ script_tool(project_folder)
+
arcpy.SetParameterAsText(1, "Result")
- del project_gdb
- except SystemExit:
- pass
- except:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- else:
- pass
- finally:
- sys.exit()
+ del project_folder
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
\ No newline at end of file
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/publish_to_portal_director.py b/ArcGIS-Analysis-Python/src/dismap_tools/publish_to_portal_director.py
index d4c3116..c534b04 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/publish_to_portal_director.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/publish_to_portal_director.py
@@ -9,12 +9,22 @@
# Copyright: (c) john.f.kennedy 2024
# Licence:
# -------------------------------------------------------------------------------
-import os, sys # built-ins first
+import os
+import sys
import traceback
-import inspect
import arcpy # third-parties second
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ #filename = sys.path[0] + os.sep + f"{os.path.basename(__file__)}"
+ filename = os.path.basename(__file__)
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
def feature_sharing_draft_report(sd_draft=""):
try:
import xml.dom.minidom as DOM
@@ -24,7 +34,7 @@ def feature_sharing_draft_report(sd_draft=""):
value_list = docs.getElementsByTagName("Value")
for i in range(key_list.length):
- value = f"Value: {value_list[i].firstChild.nodeValue}" if value_list[i].firstChild else f"Value is missing"
+ value = f"Value: {value_list[i].firstChild.nodeValue}" if value_list[i].firstChild else "Value is missing"
arcpy.AddMessage(f"\t\tKey: {key_list[i].firstChild.nodeValue:<45} {value}")
# arcpy.AddMessage(f"\t\tKey: {key_list[i].firstChild.nodeValue:<45} {value[:50]}")
@@ -33,40 +43,28 @@ def feature_sharing_draft_report(sd_draft=""):
del DOM, key_list, value_list, docs
del sd_draft
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
def create_feature_class_layers(project_gdb=""):
try:
# Import
from arcpy import metadata as md
- from dismap_tools import dataset_title_dict, parse_xml_file_format_and_save
+ from dismap_tools import dataset_title_dict, parse_xml_file_format_and_save, clear_folder
# Test if passed workspace exists, if not sys.exit()
if not arcpy.Exists(project_gdb):
@@ -84,18 +82,18 @@ def create_feature_class_layers(project_gdb=""):
project_folder = os.path.dirname(project_gdb)
project_name = os.path.basename(project_folder)
csv_data_folder = rf"{project_folder}\CSV_Data"
- scratch_folder = rf"{project_folder}\Scratch"
- scratch_workspace = rf"{project_folder}\Scratch\scratch.gdb"
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(project_folder, "Scratch", "scratch.gdb")
# Clear Scratch Folder
- dismap_tools.clear_folder(folder=scratch_folder)
+ clear_folder(folder=scratch_folder)
# Create Scratch Workspace for Project
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
if not arcpy.Exists(scratch_folder):
- os.makedirs(rf"{scratch_folder}")
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"scratch")
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(scratch_folder, "scratch")
# Set basic workspace variables
arcpy.env.workspace = project_gdb
@@ -134,11 +132,11 @@ def create_feature_class_layers(project_gdb=""):
if desc["dataType"] == "FeatureClass":
- arcpy.AddMessage(f"\tMake Feature Layer")
+ arcpy.AddMessage("\tMake Feature Layer")
feature_class_layer = arcpy.management.MakeFeatureLayer(feature_class_path, feature_service_title)
feature_class_layer_file = rf"{project_folder}\Layers\{feature_class_layer}.lyrx"
- arcpy.AddMessage(f"\tSave Layer File")
+ arcpy.AddMessage("\tSave Layer File")
_result = arcpy.management.SaveToLayerFile(
in_layer = feature_class_layer,
out_layer = feature_class_layer_file,
@@ -152,7 +150,7 @@ def create_feature_class_layers(project_gdb=""):
elif desc["dataType"] == "Table":
- arcpy.AddMessage(f"\tMake Table View")
+ arcpy.AddMessage("\tMake Table View")
feature_class_layer = arcpy.management.MakeTableView(
in_table = feature_class_path,
out_view = feature_service_title,
@@ -162,7 +160,7 @@ def create_feature_class_layers(project_gdb=""):
)
feature_class_layer_file = rf"{project_folder}\Layers\{feature_class_layer}.lyrx"
- arcpy.AddMessage(f"\tSave Layer File")
+ arcpy.AddMessage("\tSave Layer File")
_result = arcpy.management.SaveToLayerFile(
in_layer = feature_class_layer,
out_layer = feature_class_layer_file,
@@ -178,7 +176,7 @@ def create_feature_class_layers(project_gdb=""):
pass
if [f.name for f in arcpy.ListFields(feature_class_path) if f.name == "StdTime"]:
- arcpy.AddMessage(f"\tSet Time Enabled if time field is in dataset")
+ arcpy.AddMessage("\tSet Time Enabled if time field is in dataset")
# Get time information from a layer in a layer file
layer_file = arcpy.mp.LayerFile(feature_class_layer_file)
layer = layer_file.listLayers()[0]
@@ -212,7 +210,7 @@ def create_feature_class_layers(project_gdb=""):
del layer
del layer_file
else:
- arcpy.AddMessage(f"\tDataset does not have a time field")
+ arcpy.AddMessage("\tDataset does not have a time field")
layer_file = arcpy.mp.LayerFile(feature_class_layer_file)
@@ -266,7 +264,7 @@ def create_feature_class_layers(project_gdb=""):
aprx.save()
del basemap
- arcpy.AddMessage(f"\t\tCreate map thumbnail and update metadata")
+ arcpy.AddMessage("\t\tCreate map thumbnail and update metadata")
current_map_view = current_map.defaultView
current_map_view.exportToPNG(
rf"{project_folder}\Layers\{feature_service_title}.png",
@@ -300,7 +298,7 @@ def create_feature_class_layers(project_gdb=""):
arcpy.AddMessage(f"\t\tLayer File Path: {layer_file.filePath}")
arcpy.AddMessage(f"\t\tLayer File Version: {layer_file.version}")
- arcpy.AddMessage(f"\t\tLayer File Metadata:")
+ arcpy.AddMessage("\t\tLayer File Metadata:")
arcpy.AddMessage(f"\t\t\tLayer File Title: {layer_file.metadata.title}")
#arcpy.AddMessage(f"\t\t\tLayer File Tags: {layer_file.metadata.tags}")
#arcpy.AddMessage(f"\t\t\tLayer File Summary: {layer_file.metadata.summary}")
@@ -308,13 +306,13 @@ def create_feature_class_layers(project_gdb=""):
#arcpy.AddMessage(f"\t\t\tLayer File Credits: {layer_file.metadata.credits}")
#arcpy.AddMessage(f"\t\t\tLayer File Access Constraints: {layer_file.metadata.accessConstraints}")
- arcpy.AddMessage(f"\t\tList of layers or tables in Layer File:")
+ arcpy.AddMessage("\t\tList of layers or tables in Layer File:")
if current_map.listLayers(feature_service_title):
layer = current_map.listLayers(feature_service_title)[0]
elif current_map.listTables(feature_service_title):
layer = current_map.listTables(feature_service_title)[0]
else:
- arcpy.AddWarning(f"Something wrong")
+ arcpy.AddWarning(f"No layer or table named '{feature_service_title}' found in the map")
in_md = md.Metadata(feature_class_path)
layer.metadata.copy(in_md)
@@ -324,7 +322,7 @@ def create_feature_class_layers(project_gdb=""):
del in_md
arcpy.AddMessage(f"\t\t\tLayer Name: {layer.name}")
- arcpy.AddMessage(f"\t\t\tLayer Metadata:")
+ arcpy.AddMessage("\t\t\tLayer Metadata:")
arcpy.AddMessage(f"\t\t\t\tLayer Title: {layer.metadata.title}")
#arcpy.AddMessage(f"\t\t\t\tLayer Tags: {layer.metadata.tags}")
#arcpy.AddMessage(f"\t\t\t\tLayer Summary: {layer.metadata.summary}")
@@ -361,34 +359,22 @@ def create_feature_class_layers(project_gdb=""):
# Function Parameters
del project_gdb
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
def create_feature_class_services(project_gdb=""):
try:
@@ -412,15 +398,15 @@ def create_feature_class_services(project_gdb=""):
project_folder = os.path.dirname(project_gdb)
project_name = os.path.basename(project_folder)
csv_data_folder = rf"{project_folder}\CSV_Data"
- scratch_folder = rf"{project_folder}\Scratch"
- scratch_workspace = rf"{project_folder}\Scratch\scratch.gdb"
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(project_folder, "Scratch", "scratch.gdb")
# Create Scratch Workspace for Project
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
if not arcpy.Exists(scratch_folder):
- os.makedirs(rf"{scratch_folder}")
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"scratch")
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(scratch_folder, "scratch")
# Set basic workspace variables
arcpy.env.workspace = project_gdb
@@ -530,13 +516,13 @@ def create_feature_class_services(project_gdb=""):
del layer_file
- arcpy.AddMessage(f"\t\tList of layers or tables in Layer File:")
+ arcpy.AddMessage("\t\tList of layers or tables in Layer File:")
if current_map.listLayers(feature_service_title):
lyr = current_map.listLayers(feature_service_title)[0]
elif current_map.listTables(feature_service_title):
lyr = current_map.listTables(feature_service_title)[0]
else:
- arcpy.AddWarning(f"Something wrong")
+ arcpy.AddWarning(f"No layer or table named '{feature_service_title}' found in the map")
in_md = md.Metadata(rf"{project_gdb}\{dataset}")
lyr.metadata.copy(in_md)
@@ -544,7 +530,7 @@ def create_feature_class_services(project_gdb=""):
aprx.save()
del in_md
- arcpy.AddMessage(f"\tGet Web Layer Sharing Draft")
+ arcpy.AddMessage("\tGet Web Layer Sharing Draft")
# Get Web Layer Sharing Draft
server_type = "HOSTING_SERVER" # FEDERATED_SERVER
# m.getWebLayerSharingDraft (server_type, service_type, service_name, {layers_and_tables})
@@ -580,7 +566,7 @@ def create_feature_class_services(project_gdb=""):
#arcpy.AddMessage(f"\t\tTags: {sddraft.tags}")
#arcpy.AddMessage(f"\t\tUse Limitations: {sddraft.useLimitations}")
- arcpy.AddMessage(f"\tExport to SD Draft")
+ arcpy.AddMessage("\tExport to SD Draft")
# Create Service Definition Draft file
sddraft.exportToSDDraft(rf"{project_folder}\Publish\{feature_service}.sddraft")
@@ -588,7 +574,7 @@ def create_feature_class_services(project_gdb=""):
sd_draft = rf"{project_folder}\Publish\{feature_service}.sddraft"
- arcpy.AddMessage(f"\tModify SD Draft")
+ arcpy.AddMessage("\tModify SD Draft")
# https://pro.arcgis.com/en/pro-app/latest/arcpy/sharing/featuresharingdraft-class.htm
# https://www.esri.com/arcgis-blog/products/arcgis-pro/mapping/streamline-your-code-with-new-properties-in-arcpy-sharing
import xml.dom.minidom as DOM
@@ -599,7 +585,7 @@ def create_feature_class_services(project_gdb=""):
for i in range(key_list.length):
if key_list[i].firstChild.nodeValue == "maxRecordCount":
- arcpy.AddMessage(f"\t\tUpdating maxRecordCount from 2000 to 10000")
+ arcpy.AddMessage("\t\tUpdating maxRecordCount from 2000 to 10000")
value_list[i].firstChild.nodeValue = 10000
if key_list[i].firstChild.nodeValue == "ServiceTitle":
arcpy.AddMessage(f"\t\tUpdating ServiceTitle from {value_list[i].firstChild.nodeValue} to {feature_service_title}")
@@ -663,7 +649,7 @@ def create_feature_class_services(project_gdb=""):
current_maps = aprx.listMaps()
if current_maps:
- arcpy.AddMessage(f"\nCurrent Maps\n")
+ arcpy.AddMessage("\nCurrent Maps\n")
for current_map in current_maps:
arcpy.AddMessage(f"\tProject Map: {current_map.name}")
del current_map
@@ -687,34 +673,22 @@ def create_feature_class_services(project_gdb=""):
# Function Parameters
del project_gdb
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
##def update_metadata_from_published_md(project_gdb=""):
## try:
@@ -761,8 +735,8 @@ def create_feature_class_services(project_gdb=""):
## # Get values for table_name from Datasets table
## #fields = ["FeatureClassName", "FeatureServiceName", "FeatureServiceTitle"]
## fields = ["DatasetCode", "PointFeatureType", "FeatureClassName", "Region", "Season", "DateCode", "DistributionProjectCode"]
-## datasets = [row for row in arcpy.da.SearchCursor(rf"{project_gdb}\Datasets", fields, where_clause = f"FeatureClassName IS NOT NULL AND DistributionProjectCode NOT IN ('GLMME', 'GFDL')")]
-## #datasets = [row for row in arcpy.da.SearchCursor(rf"{project_gdb}\Datasets", fields, where_clause = f"FeatureClassName IN ('AI_IDW_Sample_Locations', 'DisMAP_Regions')")]
+## datasets = [row for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"), fields, where_clause = f"FeatureClassName IS NOT NULL AND DistributionProjectCode NOT IN ('GLMME', 'GFDL')")]
+## #datasets = [row for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"), fields, where_clause = f"FeatureClassName IN ('AI_IDW_Sample_Locations', 'DisMAP_Regions')")]
## del fields
##
## for dataset in datasets:
@@ -849,12 +823,6 @@ def create_feature_class_services(project_gdb=""):
def create_image_services(project_gdb=""):
try:
# Import
- import dismap_tools
- from arcpy import metadata as md
-
- # Test if passed workspace exists, if not sys.exit()
- if not arcpy.Exists(base_project_file):
- sys.exit()(f"{os.path.basename(base_project_file)} is missing!!")
# Set History and Metadata logs, set serverity and message level
arcpy.SetLogHistory(True) # Look in %AppData%\Roaming\Esri\ArcGISPro\ArcToolbox\History
@@ -864,26 +832,22 @@ def create_image_services(project_gdb=""):
# 2—If a tool produces an error, it will throw an exception. This is the default.
arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
- aprx = arcpy.mp.ArcGISProject(base_project_file)
- home_folder = aprx.homeFolder
-
- project_gdb = rf"{project_folder}\{project}.gdb"
-
- # Test if passed workspace exists, if not sys.exit()
- if not arcpy.Exists(project_gdb):
- sys.exit()(f"{os.path.basename(project_gdb)} is missing!!")
+ #aprx = arcpy.mp.ArcGISProject(base_project_file) # noqa: F821
+ #home_folder = aprx.homeFolder
+ #project_gdb = rf"{project_folder}\{project}.gdb" # noqa: F821
# Set basic workspace variables
project_folder = os.path.dirname(project_gdb)
- scratch_folder = rf"{project_folder}\Scratch"
- scratch_workspace = rf"{project_folder}\Scratch\scratch.gdb"
+ crfs_folder = os.path.join(project_folder, "CRFs")
+ scratch_folder = os.path.join(project_folder, "Scratch")
+ scratch_workspace = os.path.join(project_folder, "Scratch", "scratch.gdb")
# Create Scratch Workspace for Project
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
if not arcpy.Exists(scratch_folder):
- os.makedirs(rf"{scratch_folder}")
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"scratch")
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(scratch_folder, "scratch")
# Set basic workspace variables
arcpy.env.workspace = project_gdb
@@ -893,29 +857,34 @@ def create_image_services(project_gdb=""):
del scratch_folder, scratch_workspace
- LogIntoPortal = False
- if LogIntoPortal:
- try:
- portal = "https://noaa.maps.arcgis.com/"
- user = "John.F.Kennedy_noaa"
-
- #portal = "https://maps.fisheries.noaa.gov/portal/home"
- #portal = "https://maps.fisheries.noaa.gov"
- #user = "John.F.Kennedy_noaa"
+ arcpy.env.workspace = crfs_folder
- # Sign in to portal
- # arcpy.SignInToPortal("https://www.arcgis.com", "MyUserName", "MyPassword")
- # For example: 'http://www.arcgis.com/'
- arcpy.SignInToPortal(portal)
+ for crf in arcpy.ListRasters("*"):
+ arcpy.AddMessage(crf)
- arcpy.AddMessage(f"###---> Signed into Portal: {arcpy.GetActivePortalURL()} <---###")
- del portal, user
- except:
- arcpy.AddError(f"###---> Signed into Portal faild <---###")
- sys.exit()
- del LogIntoPortal
+ arcpy.env.workspace = project_gdb
- arcpy.AddMessage(f"\n{'-' * 90}\n")
+## LogIntoPortal = False
+## if LogIntoPortal:
+## try:
+## portal = "https://noaa.maps.arcgis.com/"
+## user = "John.F.Kennedy_noaa"
+##
+## #portal = "https://maps.fisheries.noaa.gov/portal/home"
+## #portal = "https://maps.fisheries.noaa.gov"
+## #user = "John.F.Kennedy_noaa"
+##
+## # Sign in to portal
+## # arcpy.SignInToPortal("https://www.arcgis.com", "MyUserName", "MyPassword")
+## # For example: 'http://www.arcgis.com/'
+## arcpy.SignInToPortal(portal)
+##
+## arcpy.AddMessage(f"###---> Signed into Portal: {arcpy.GetActivePortalURL()} <---###")
+## del portal, user
+## except: # noqa: E722
+## arcpy.AddError("###---> Signing into Portal failed <---###")
+## sys.exit()
+## del LogIntoPortal
# Publishes an image service to a machine "myserver" from a folder of ortho images
# this code first author a mosaic dataset from the images, then publish it as an image service.
@@ -936,47 +905,29 @@ def create_image_services(project_gdb=""):
#Sddraft = os.path.join(MyWorkspace,Name+".sddraft")
#Sd = os.path.join(MyWorkspace,Name+".sd")
#con = os.path.join(MyWorkspace, "arcgis on myserver_6080 (admin).ags")
- con = r"{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\DisMAP-ArcGIS-Analysis\server on maps.fisheries.noaa.gov.ags"
- mosiac_name = "SEUS_FAL_Mosaic"
+ con = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis", "image on maps.fisheries.noaa.gov.ags")
+
+ mosiac_name = "SEUS_FAL_IDW_Mosaic"
mosiac_path = rf"{project_gdb}\{mosiac_name}"
mosiac_sddraft = rf"{project_folder}\Publish\{mosiac_name}.sddraft"
-## SrsLookup = {
-## 'Mercator': "PROJCS['World_Mercator',GEOGCS['GCS_WGS_1984',DATUM['D_WGS_1984',SPHEROID['WGS_1984',6378137,298.257223563]],PRIMEM['Greenwich',0],UNIT['Degree',0.017453292519943295]],PROJECTION['Mercator'],PARAMETER['False_Easting',0],PARAMETER['False_Northing',0],PARAMETER['Central_Meridian',0],PARAMETER['Standard_Parallel_1',0],UNIT['Meter',1]]",
-## 'WGS84': "GEOGCS['GCS_WGS_1984',DATUM['D_WGS_1984',SPHEROID['WGS_1984',6378137,298.257223563]],PRIMEM['Greenwich',0],UNIT['Degree',0.017453292519943295]]",
-## 'GZ4': "PROJCS['Germany_Zone_4',GEOGCS['GCS_Deutsches_Hauptdreiecksnetz',DATUM['D_Deutsches_Hauptdreiecksnetz',SPHEROID['Bessel_1841',6377397.155,299.1528128]],PRIMEM['Greenwich',0],UNIT['Degree',0.017453292519943295]],PROJECTION['Transverse_Mercator'],PARAMETER['False_Easting',4500000],PARAMETER['False_Northing',0],PARAMETER['Central_Meridian',12],PARAMETER['Scale_Factor',1],PARAMETER['Latitude_Of_Origin',0],UNIT['Meter',1]]",
-## 'GCS_NAD83': "GEOGCS['GCS_North_American_1983',DATUM['D_North_American_1983',SPHEROID['GRS_1980',6378137,298.257222101]],PRIMEM['Greenwich',0],UNIT['Degree',0.017453292519943295]]",
-## 'PUG': "PROJCS['PUG1',GEOGCS['GCS_North_American_1983',DATUM['D_North_American_1983',SPHEROID['GRS_1980',6378137.0,298.257222101]],PRIMEM['Greenwich',0.0],UNIT['Degree',0.0174532925199433]],PROJECTION['Transverse_Mercator'],PARAMETER['False_Easting',1640416.666666667],PARAMETER['False_Northing',0.0],PARAMETER['Central_Meridian',-87.0],PARAMETER['Scale_Factor',0.9996],PARAMETER['Latitude_Of_Origin',0.0],UNIT['Foot_US',0.3048006096012192]]",
-## 'Florida_East': "PROJCS['NAD_1983_StatePlane_Florida_East_FIPS_0901_Feet',GEOGCS['GCS_North_American_1983',DATUM['D_North_American_1983',SPHEROID['GRS_1980',6378137,298.257222101]],PRIMEM['Greenwich',0],UNIT['Degree',0.0174532925199432955]],PROJECTION['Transverse_Mercator'],PARAMETER['False_Easting',656166.6666666665],PARAMETER['False_Northing',0],PARAMETER['Central_Meridian',-81],PARAMETER['Scale_Factor',0.9999411764705882],PARAMETER['Latitude_Of_Origin',24.33333333333333],UNIT['Foot_US',0.304800609601219241]]",
-## 'SoCalNad83': "PROJCS['NAD_1983_StatePlane_California_V_FIPS_0405',GEOGCS['GCS_North_American_1983',DATUM['D_North_American_1983',SPHEROID['GRS_1980',6378137,298.257222101]],PRIMEM['Greenwich',0],UNIT['Degree',0.0174532925199432955]],PROJECTION['Lambert_Conformal_Conic'],PARAMETER['False_Easting',2000000],PARAMETER['False_Northing',500000],PARAMETER['Central_Meridian',-118],PARAMETER['Standard_Parallel_1',34.03333333333333],PARAMETER['Standard_Parallel_2',35.46666666666667],PARAMETER['Latitude_Of_Origin',33.5],UNIT['Meter',1]]"
-## }
-
-## # First author a mosaic dataset from a folder of images
-## try:
-## arcpy.AddMessage("Creating fgdb")
-## arcpy.CreateFileGDB_management(MyWorkspace, GdbName)
-##
-## arcpy.AddMessage("Creating mosaic dataset")
-## #arcpy.CreateMosaicDataset_management(GDBpath, Name, SrsLookup['Mercator'], "", "", "NONE", "")
-## arcpy.CreateMosaicDataset_management(project_gdb, mosiac_name, SrsLookup['Mercator'], "", "", "NONE", "")
-##
-## arcpy.AddMessage("Adding images to mosaic dataset") # also caculate cell size range, build boundary, and build overviews
-## #arcpy.AddRastersToMosaicDataset_management(Md, "Raster Dataset", ImageSource, "UPDATE_CELL_SIZES", "UPDATE_BOUNDARY", "UPDATE_OVERVIEWS", "#", "0", "1500", "#", "#", "SUBFOLDERS", "ALLOW_DUPLICATES", "NO_PYRAMIDS", "NO_STATISTICS", "NO_THUMBNAILS", "", "NO_FORCE_SPATIAL_REFERENCE")
-## arcpy.AddRastersToMosaicDataset_management(mosiac_path, "Raster Dataset", ImageSource, "UPDATE_CELL_SIZES", "UPDATE_BOUNDARY", "UPDATE_OVERVIEWS", "#", "0", "1500", "#", "#", "SUBFOLDERS", "ALLOW_DUPLICATES", "NO_PYRAMIDS", "NO_STATISTICS", "NO_THUMBNAILS", "", "NO_FORCE_SPATIAL_REFERENCE")
-## except:
-## arcpy.AddError(arcpy.GetMessages()+ "\n\n")
-## sys.exit("Failed in authoring a mosaic dataset")
# Create service definition draft
try:
arcpy.AddMessage("Creating SD draft")
#arcpy.CreateImageSDDraft(Md, Sddraft, Name, 'ARCGIS_SERVER', con, False, None, "Ortho Images","ortho images,image service")
- arcpy.CreateImageSDDraft(mosiac_path, mosiac_sddraft, mosiac_name, 'ARCGIS_SERVER', con, False, None, "Ortho Images", "ortho images,image service")
- except:
- arcpy.AddError(arcpy.GetMessages()+ "\n\n")
- sys.exit("Failed in creating SD draft")
-
+ arcpy.CreateImageSDDraft(mosiac_path, mosiac_sddraft, mosiac_name, 'ARCGIS_SERVER', con, False, None, "Biomass Rasters", "biomass rasters,image service")
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ traceback.print_exc()
+ return False
## # Analyze the service definition draft
## analysis = arcpy.mapping.AnalyzeForSD(Sddraft)
## arcpy.AddMessage("The following information was returned during analysis of the image service:")
@@ -1010,1376 +961,44 @@ def create_image_services(project_gdb=""):
## arcpy.AddError("Service could not be published because errors were found during analysis.")
## arcpy.AddError(arcpy.GetMessages(2))
- arcpy.AddMessage(f"\n{'-' * 90}\n")
-
- del project_gdb
+ #del project_gdb
# Declared Variables set in function for aprx
- del home_folder
+ #del home_folder
# Save aprx one more time and then delete
- aprx.save()
- del aprx
+ #aprx.save()
+ #del aprx
# Declared Variables set in function
# Imports
# Function Parameters
- del base_project_file, project
+ del project_gdb
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except Exception:
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
-
-##def create_basic_template_xml_files(project_gdb=""):
-## try:
-## # Import
-## from arcpy import metadata as md
-## from dismap_tools import dataset_title_dict, parse_xml_file_format_and_save
-##
-## arcpy.env.overwriteOutput = True
-## arcpy.env.parallelProcessingFactor = "100%"
-## arcpy.SetLogMetadata(True)
-## arcpy.SetSeverityLevel(2)
-## arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
-##
-## # Map Cleanup
-## MapCleanup = False
-## if MapCleanup:
-## map_cleanup(base_project_file)
-## del MapCleanup
-##
-## base_project_folder = rf"{os.path.dirname(base_project_file)}"
-## base_project_file = rf"{base_project_folder}\DisMAP.aprx"
-## project_folder = rf"{base_project_folder}\{project}"
-## project_gdb = rf"{project_folder}\{project}.gdb"
-## metadata_folder = rf"{project_folder}\Export Metadata"
-## crfs_folder = rf"{project_folder}\CRFs"
-## scratch_folder = rf"{project_folder}\Scratch"
-##
-## metadata_dictionary = dataset_title_dict(project_gdb)
-##
-## workspaces = [project_gdb, crfs_folder]
-##
-## for workspace in workspaces:
-##
-## arcpy.env.workspace = workspace
-## arcpy.env.scratchWorkspace = rf"{scratch_folder}\scratch.gdb"
-##
-## datasets = list()
-##
-## walk = arcpy.da.Walk(workspace)
-##
-## for dirpath, dirnames, filenames in walk:
-## for filename in filenames:
-## datasets.append(os.path.join(dirpath, filename))
-## del filename
-## del dirpath, dirnames, filenames
-## del walk
-##
-## for dataset_path in sorted(datasets):
-## #arcpy.AddMessage(dataset_path)
-## dataset_name = os.path.basename(dataset_path)
-##
-## arcpy.AddMessage(f"Dataset Name: {dataset_name}")
-##
-## if "Datasets" == dataset_name:
-##
-## arcpy.AddMessage(f"\tDataset Table")
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## datasets_table_template = rf"{project_folder}\Metadata_Export\datasets_table_template.xml"
-## dataset_md.saveAsXML(datasets_table_template)
-## parse_xml_file_format_and_save(csv_data_folder=csv_data_folder, xml_file=datasets_table_template, sort=True)
-## del datasets_table_template
-##
-## del dataset_md
-##
-## elif "Species_Filter" == dataset_name:
-##
-## arcpy.AddMessage(f"\tSpecies Filter Table")
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-##
-## dataset_md.synchronize("ALWAYS")
-##
-## species_filter_table_template = rf"{metadata_folder}\species_filter_table_template.xml"
-## dataset_md.saveAsXML(species_filter_table_template)
-## parse_xml_file_format_and_save(csv_data_folder=csv_data_folder, xml_file=species_filter_table_template, sort=True)
-## del species_filter_table_template
-##
-## del dataset_md
-##
-## elif "Indicators" in dataset_name:
-##
-## arcpy.AddMessage(f"\tIndicators")
-##
-## if dataset_name == "Indicators":
-## dataset_name = f"{dataset_name}_Table"
-## else:
-## pass
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-##
-## dataset_md.synchronize("ALWAYS")
-##
-## indicators_template = rf"{metadata_folder}\indicators_template.xml"
-## dataset_md.saveAsXML(indicators_template)
-## parse_xml_file_format_and_save(indicators_template)
-## parse_xml_file_format_and_save(csv_data_folder=csv_data_folder, xml_file=indicators_template, sort=True)
-## del indicators_template
-##
-## del dataset_md
-##
-## elif "LayerSpeciesYearImageName" in dataset_name:
-##
-## arcpy.AddMessage(f"\tLayer Species Year Image Name")
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-##
-## dataset_md.synchronize("ALWAYS")
-##
-## layer_species_year_image_name_template = rf"{metadata_folder}\layer_species_year_image_name_template.xml"
-## dataset_md.saveAsXML(layer_species_year_image_name_template)
-## parse_xml_file_format_and_save(csv_data_folder=csv_data_folder, xml_file=layer_species_year_image_name_template, sort=True)
-## del layer_species_year_image_name_template
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Boundary"):
-##
-## arcpy.AddMessage(f"\tBoundary")
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-##
-## dataset_md.synchronize("ALWAYS")
-##
-## boundary_template = rf"{metadata_folder}\boundary_template.xml"
-## dataset_md.saveAsXML(boundary_template)
-## parse_xml_file_format_and_save(csv_data_folder=csv_data_folder, xml_file=boundary_template, sort=True)
-## del boundary_template
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Extent_Points"):
-##
-## arcpy.AddMessage(f"\tExtent_Points")
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-## extent_points_template = rf"{metadata_folder}\extent_points_template.xml"
-## dataset_md.saveAsXML(extent_points_template)
-## parse_xml_file_format_and_save(csv_data_folder=csv_data_folder, xml_file=extent_points_template, sort=True)
-## del extent_points_template
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Fishnet"):
-##
-## arcpy.AddMessage(f"\tFishnet")
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## fishnet_template = rf"{metadata_folder}\fishnet_template.xml"
-## dataset_md.saveAsXML(fishnet_template)
-## parse_xml_file_format_and_save(fishnet_template)
-## del fishnet_template
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Lat_Long"):
-##
-## arcpy.AddMessage(f"\tLat_Long")
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## lat_long_template = rf"{metadata_folder}\lat_long_template.xml"
-## dataset_md.saveAsXML(lat_long_template)
-## parse_xml_file_format_and_save(lat_long_template)
-## del lat_long_template
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Region"):
-##
-## arcpy.AddMessage(f"\tRegion")
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## region_template = rf"{metadata_folder}\region_template.xml"
-## dataset_md.saveAsXML(region_template)
-## parse_xml_file_format_and_save(region_template)
-## del region_template
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Sample_Locations"):
-##
-## arcpy.AddMessage(f"\tSample_Locations")
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## sample_locations_template = rf"{metadata_folder}\sample_locations_template.xml"
-## dataset_md.saveAsXML(sample_locations_template)
-## parse_xml_file_format_and_save(sample_locations_template)
-## del sample_locations_template
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("GRID_Points"):
-##
-## arcpy.AddMessage(f"\tGRID_Points")
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## grid_points_template = rf"{metadata_folder}\grid_points_template.xml"
-## dataset_md.saveAsXML(grid_points_template)
-## parse_xml_file_format_and_save(grid_points_template)
-## del grid_points_template
-##
-## del dataset_md
-##
-## elif "DisMAP_Regions" == dataset_name:
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## dismap_regions_template = rf"{metadata_folder}\dismap_regions_template.xml"
-## dataset_md.saveAsXML(dismap_regions_template)
-## parse_xml_file_format_and_save(dismap_regions_template)
-## del dismap_regions_template
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Bathymetry"):
-##
-## arcpy.AddMessage(f"\tBathymetry")
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## bathymetry_template = rf"{metadata_folder}\bathymetry_template.xml"
-## dataset_md.saveAsXML(bathymetry_template)
-## parse_xml_file_format_and_save(bathymetry_template)
-## del bathymetry_template
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Latitude"):
-##
-## arcpy.AddMessage(f"\tLatitude")
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## latitude_template = rf"{metadata_folder}\latitude_template.xml"
-## dataset_md.saveAsXML(latitude_template)
-## parse_xml_file_format_and_save(latitude_template)
-## del latitude_template
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Longitude"):
-##
-## arcpy.AddMessage(f"\tLongitude")
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## longitude_template = rf"{metadata_folder}\longitude_template.xml"
-## dataset_md.saveAsXML(longitude_template)
-## parse_xml_file_format_and_save(longitude_template)
-## del longitude_template
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Raster_Mask"):
-##
-## arcpy.AddMessage(f"\tRaster_Mask")
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## raster_mask_template = rf"{metadata_folder}\raster_mask_template.xml"
-## dataset_md.saveAsXML(raster_mask_template)
-## parse_xml_file_format_and_save(raster_mask_template)
-## del raster_mask_template
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Mosaic"):
-##
-## arcpy.AddMessage(f"\tMosaic")
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## mosaic_template = rf"{metadata_folder}\mosaic_template.xml"
-## dataset_md.saveAsXML(mosaic_template)
-## parse_xml_file_format_and_save(mosaic_template)
-## del mosaic_template
-##
-## del dataset_md
-##
-## elif dataset_name.endswith(".crf"):
-##
-## arcpy.AddMessage(f"\tCRF")
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## crf_template = rf"{metadata_folder}\crf_template.xml"
-## dataset_md.saveAsXML(crf_template)
-## parse_xml_file_format_and_save(crf_template)
-## del crf_template
-##
-## del dataset_md
-##
-## else:
-## arcpy.AddMessage(f"\tRegion Table")
-##
-## if dataset_name.endswith("IDW"):
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## del empty_md
-##
-## dataset_md.title = metadata_dictionary[f"{dataset_name}"]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[f"{dataset_name}"]["Tags"]
-## dataset_md.summary = metadata_dictionary[f"{dataset_name}"]["Summary"]
-## dataset_md.description = metadata_dictionary[f"{dataset_name}"]["Description"]
-## dataset_md.credits = metadata_dictionary[f"{dataset_name}"]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[f"{dataset_name}"]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## idw_region_table_template = rf"{metadata_folder}\idw_region_table_template.xml"
-## dataset_md.saveAsXML(idw_region_table_template)
-## parse_xml_file_format_and_save(idw_region_table_template)
-## del idw_region_table_template
-##
-## del dataset_md
-## else:
-## pass
-## del dataset_name, dataset_path
-## del workspace
-##
-## del datasets
-##
-## # Declared Variables set in function
-## del project_gdb, base_project_folder, metadata_folder
-## del project_folder, scratch_folder, crfs_folder
-## del metadata_dictionary, workspaces
-##
-## # Imports
-## del dataset_title_dict, parse_xml_file_format_and_save
-## del md
-##
-## # Function Parameters
-## del base_project_file, project
-##
-## except KeyboardInterrupt:
-## sys.exit()
-## except arcpy.ExecuteWarning:
-## arcpy.AddWarning(arcpy.GetMessages(1))
-## except arcpy.ExecuteError:
-## arcpy.AddError(arcpy.GetMessages(2))
-## traceback.print_exc()
-## sys.exit()
-## except SystemExit:
-## arcpy.AddError(arcpy.GetMessages(2))
-## traceback.print_exc()
-## sys.exit()
-## except Exception:
-## arcpy.AddError(arcpy.GetMessages(2))
-## traceback.print_exc()
-## sys.exit()
-## except:
-## arcpy.AddError(arcpy.GetMessages(2))
-## traceback.print_exc()
-## sys.exit()
-## else:
-## # While in development, leave here. For test, move to finally
-## rk = [key for key in locals().keys() if not key.startswith('__')]
-## if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
-## return True
-## finally:
-## pass
-##
-##def import_basic_template_xml_files(project_gdb=""):
-## try:
-## # Import
-## from arcpy import metadata as md
-##
-## from dismap_tools import dataset_title_dict, parse_xml_file_format_and_saves, unique_years
-##
-## arcpy.env.overwriteOutput = True
-## arcpy.env.parallelProcessingFactor = "100%"
-## arcpy.SetLogMetadata(True)
-## arcpy.SetSeverityLevel(2)
-## arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
-##
-## # Map Cleanup
-## MapCleanup = False
-## if MapCleanup:
-## map_cleanup(base_project_file)
-## del MapCleanup
-##
-## base_project_folder = rf"{os.path.dirname(base_project_file)}"
-## base_project_file = rf"{base_project_folder}\DisMAP.aprx"
-## project_folder = rf"{base_project_folder}\{project}"
-## project_gdb = rf"{project_folder}\{project}.gdb"
-## metadata_folder = rf"{project_folder}\Current Metadata"
-## crfs_folder = rf"{project_folder}\CRFs"
-## scratch_folder = rf"{project_folder}\Scratch"
-##
-## #arcpy.AddMessage("Creating the Metadata Dictionary. Please wait!!")
-## metadata_dictionary = dataset_title_dict(project_gdb)
-## #arcpy.AddMessage("Creating the Metadata Dictionary. Completed")
-##
-## #workspaces = [project_gdb, crfs_folder]
-## workspaces = [crfs_folder]
-##
-## for workspace in workspaces:
-##
-## arcpy.env.workspace = workspace
-## arcpy.env.scratchWorkspace = rf"{scratch_folder}\scratch.gdb"
-##
-## datasets = list()
-##
-## walk = arcpy.da.Walk(workspace)
-##
-## for dirpath, dirnames, filenames in walk:
-## for filename in filenames:
-## datasets.append(os.path.join(dirpath, filename))
-## del filename
-## del dirpath, dirnames, filenames
-## del walk
-##
-## for dataset_path in sorted(datasets):
-## #arcpy.AddMessage(dataset_path)
-## dataset_name = os.path.basename(dataset_path)
-##
-## arcpy.AddMessage(f"Dataset Name: {dataset_name}")
-##
-## if "Datasets" == dataset_name:
-##
-## arcpy.AddMessage(f"\tDataset Table")
-##
-## datasets_table_template = rf"{metadata_folder}\datasets_table_template.xml"
-## template_md = md.Metadata(datasets_table_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## del empty_md, template_md, datasets_table_template
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## del dataset_md
-##
-## elif "Species_Filter" == dataset_name:
-##
-## arcpy.AddMessage(f"\tSpecies Filter Table")
-##
-## species_filter_table_template = rf"{metadata_folder}\species_filter_table_template.xml"
-## template_md = md.Metadata(species_filter_table_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## del empty_md, template_md, species_filter_table_template
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## del dataset_md
-##
-## elif "Indicators" in dataset_name:
-##
-## arcpy.AddMessage(f"\tIndicators")
-##
-## if dataset_name == "Indicators":
-## indicators_template = rf"{metadata_folder}\indicators_template.xml"
-## else:
-## indicators_template = rf"{metadata_folder}\region_indicators_template.xml"
-##
-## template_md = md.Metadata(indicators_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-## del empty_md, template_md, indicators_template
-##
-## # Max-Min Year range table
-## years_md = unique_years(dataset_path)
-## _tags = f", {min(years_md)} to {max(years_md)}"
-## del years_md
-##
-## #arcpy.AddMessage(metadata_dictionary[dataset_name]["Tags"])
-## #arcpy.AddMessage(_tags)
-##
-## if dataset_name == "Indicators":
-## dataset_name = f"{dataset_name}_Table"
-## else:
-## pass
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## del dataset_md, _tags
-##
-## elif "LayerSpeciesYearImageName" in dataset_name:
-##
-## arcpy.AddMessage(f"\tLayer Species Year Image Name")
-##
-## layer_species_year_image_name_template = rf"{metadata_folder}\layer_species_year_image_name_template.xml"
-## template_md = md.Metadata(layer_species_year_image_name_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## del empty_md, template_md, layer_species_year_image_name_template
-##
-## # Max-Min Year range table
-## years_md = unique_years(dataset_path)
-## _tags = f", {min(years_md)} to {max(years_md)}"
-## del years_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## del dataset_md, _tags
-##
-## elif dataset_name.endswith("Boundary"):
-##
-## arcpy.AddMessage(f"\tBoundary")
-##
-## boundary_template = rf"{metadata_folder}\boundary_template.xml"
-## template_md = md.Metadata(boundary_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## del empty_md, template_md, boundary_template
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Extent_Points"):
-##
-## arcpy.AddMessage(f"\tExtent_Points")
-##
-## extent_points_template = rf"{metadata_folder}\extent_points_template.xml"
-## template_md = md.Metadata(extent_points_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## del empty_md, template_md, extent_points_template
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Fishnet"):
-##
-## arcpy.AddMessage(f"\tFishnet")
-##
-## fishnet_template = rf"{metadata_folder}\fishnet_template.xml"
-## template_md = md.Metadata(fishnet_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## del empty_md, template_md, fishnet_template
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Lat_Long"):
-##
-## arcpy.AddMessage(f"\tLat_Long")
-##
-## lat_long_template = rf"{metadata_folder}\lat_long_template.xml"
-## template_md = md.Metadata(lat_long_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## del empty_md, template_md, lat_long_template
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Region"):
-##
-## arcpy.AddMessage(f"\tRegion")
-##
-## region_template = rf"{metadata_folder}\region_template.xml"
-## template_md = md.Metadata(region_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## del template_md, region_template
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Sample_Locations"):
-##
-## arcpy.AddMessage(f"\tSample_Locations")
-##
-## sample_locations_template = rf"{metadata_folder}\sample_locations_template.xml"
-## template_md = md.Metadata(sample_locations_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-## del template_md, sample_locations_template
-##
-## # Max-Min Year range table
-## years_md = unique_years(dataset_path)
-## _tags = f", {min(years_md)} to {max(years_md)}"
-## del years_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## del dataset_md, _tags
-##
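The removed `Sample_Locations` branch above builds a `", <min> to <max>"` tag suffix from `unique_years` before appending it to the dataset's tags. That tagging step can be sketched without arcpy; `year_range_tags` is a hypothetical helper (the empty-sequence guard is an addition, and the sample tag string is illustrative only):

```python
def year_range_tags(years):
    """Build the ', <min> to <max>' tag suffix used by the metadata blocks.

    `years` is whatever unique_years() yields for a dataset; an empty
    sequence returns an empty suffix instead of raising on min()/max().
    """
    years = list(years)
    if not years:
        return ""
    return f", {min(years)} to {max(years)}"

# Illustrative use, mirroring: dataset_md.tags = metadata_dictionary[...]["Tags"] + _tags
tags = "AI, Aleutian Islands" + year_range_tags([1991, 2018, 2004])
```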
-#### elif dataset_name.endswith("GRID_Points"):
-####
-#### arcpy.AddMessage(f"\tGRID_Points")
-####
-#### grid_points_template = rf"{metadata_folder}\grid_points_template.xml"
-#### template_md = md.Metadata(grid_points_template)
-####
-#### dataset_md = md.Metadata(dataset_path)
-#### empty_md = md.Metadata()
-#### dataset_md.copy(empty_md)
-#### dataset_md.save()
-#### dataset_md.copy(template_md)
-#### dataset_md.save()
-#### del empty_md, template_md, grid_points_template
-####
-#### # Max-Min Year range table
-#### years_md = unique_years(dataset_path)
-#### _tags = f", {min(years_md)} to {max(years_md)}"
-#### del years_md
-####
-#### dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-#### dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
-#### dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-#### dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-#### dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-#### dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-#### dataset_md.save()
-####
-#### dataset_md.synchronize("ALWAYS")
-####
-#### del dataset_md, _tags
-##
-## elif "DisMAP_Regions" == dataset_name:
-##
-## arcpy.AddMessage(f"\tDisMAP_Regions")
-##
-## dismap_regions_template = rf"{metadata_folder}\dismap_regions_template.xml"
-## template_md = md.Metadata(dismap_regions_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## del template_md, dismap_regions_template
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-## dataset_md.synchronize("ALWAYS")
-## dataset_md.save()
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Bathymetry"):
-##
-## arcpy.AddMessage(f"\tBathymetry")
-##
-## bathymetry_template = rf"{metadata_folder}\bathymetry_template.xml"
-## template_md = md.Metadata(bathymetry_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## del empty_md, template_md, bathymetry_template
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-##
-## dataset_md.synchronize("ALWAYS")
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Latitude"):
-##
-## arcpy.AddMessage(f"\tLatitude")
-##
-## latitude_template = rf"{metadata_folder}\latitude_template.xml"
-## template_md = md.Metadata(latitude_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## del empty_md, template_md, latitude_template
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-##
-## dataset_md.synchronize("ALWAYS")
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Longitude"):
-##
-## arcpy.AddMessage(f"\tLongitude")
-##
-## longitude_template = rf"{metadata_folder}\longitude_template.xml"
-## template_md = md.Metadata(longitude_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## del empty_md, template_md, longitude_template
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-##
-## dataset_md.synchronize("ALWAYS")
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Raster_Mask"):
-##
-## arcpy.AddMessage(f"\tRaster_Mask")
-##
-## raster_mask_template = rf"{metadata_folder}\raster_mask_template.xml"
-## template_md = md.Metadata(raster_mask_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## del empty_md, template_md, raster_mask_template
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"]
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-##
-## dataset_md.synchronize("ALWAYS")
-##
-## del dataset_md
-##
-## elif dataset_name.endswith("Mosaic"):
-##
-## arcpy.AddMessage(f"\tMosaic")
-##
-## mosaic_template = rf"{metadata_folder}\mosaic_template.xml"
-## template_md = md.Metadata(mosaic_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## del empty_md, template_md, mosaic_template
-##
-## # Max-Min Year range table
-## years_md = unique_years(dataset_path)
-## _tags = f", {min(years_md)} to {max(years_md)}"
-## del years_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name]["Tags"] + _tags
-## dataset_md.summary = metadata_dictionary[dataset_name]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name]["Access Constraints"]
-## dataset_md.save()
-##
-## dataset_md.synchronize("ALWAYS")
-##
-## del dataset_md, _tags
-##
-## elif dataset_name.endswith(".crf"):
-##
-## arcpy.AddMessage(f"\tCRF")
-## #arcpy.AddMessage(dataset_name)
-## #arcpy.AddMessage(dataset_path)
-## #dataset_path = dataset_path.replace(crfs_folder, project_gdb).replace(".crf", "_Mosaic")
-## #arcpy.AddMessage(dataset_path)
-##
-## crf_template = rf"{metadata_folder}\crf_template.xml"
-## template_md = md.Metadata(crf_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## del empty_md, template_md, crf_template
-##
-## # Max-Min Year range table
-## years_md = unique_years(dataset_path.replace(crfs_folder, project_gdb).replace(".crf", "_Mosaic"))
-## _tags = f", {min(years_md)} to {max(years_md)}"
-## del years_md
-##
-## dataset_md.title = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Tags"] + _tags
-## dataset_md.summary = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Summary"]
-## dataset_md.description = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Description"]
-## dataset_md.credits = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[dataset_name.replace(".crf", "_CRF")]["Access Constraints"]
-## dataset_md.save()
-##
-## dataset_md.synchronize("ALWAYS")
-##
-## del dataset_md, _tags
-##
-## else:
-## arcpy.AddMessage(f"\tRegion Table")
-##
-## if dataset_name.endswith("IDW"):
-##
-## idw_region_table_template = rf"{metadata_folder}\idw_region_table_template.xml"
-## template_md = md.Metadata(idw_region_table_template)
-##
-## dataset_md = md.Metadata(dataset_path)
-## empty_md = md.Metadata()
-## dataset_md.copy(empty_md)
-## dataset_md.save()
-## dataset_md.copy(template_md)
-## dataset_md.save()
-## del empty_md, template_md, idw_region_table_template
-##
-## # Max-Min Year range table
-## years_md = unique_years(dataset_path)
-## _tags = f", {min(years_md)} to {max(years_md)}"
-## del years_md
-##
-## dataset_md.title = metadata_dictionary[f"{dataset_name}"]["Dataset Service Title"]
-## dataset_md.tags = metadata_dictionary[f"{dataset_name}"]["Tags"] + _tags
-## dataset_md.summary = metadata_dictionary[f"{dataset_name}"]["Summary"]
-## dataset_md.description = metadata_dictionary[f"{dataset_name}"]["Description"]
-## dataset_md.credits = metadata_dictionary[f"{dataset_name}"]["Credits"]
-## dataset_md.accessConstraints = metadata_dictionary[f"{dataset_name}"]["Access Constraints"]
-## dataset_md.save()
-##
-## dataset_md.synchronize("ALWAYS")
-##
-## del dataset_md, _tags
-##
-#### elif dataset_name.endswith("GLMME"):
-####
-#### glmme_region_table_template = rf"{metadata_folder}\glmme_region_table_template.xml"
-#### template_md = md.Metadata(glmme_region_table_template)
-####
-#### dataset_md = md.Metadata(dataset_path)
-#### empty_md = md.Metadata()
-#### dataset_md.copy(empty_md)
-#### dataset_md.save()
-#### dataset_md.copy(template_md)
-#### dataset_md.save()
-#### del empty_md, template_md, glmme_region_table_template
-####
-#### # Max-Min Year range table
-#### years_md = unique_years(dataset_path)
-#### _tags = f", {min(years_md)} to {max(years_md)}"
-#### del years_md
-####
-#### dataset_md.title = metadata_dictionary[f"{dataset_name}"]["Dataset Service Title"]
-#### dataset_md.tags = metadata_dictionary[f"{dataset_name}"]["Tags"] + _tags
-#### dataset_md.summary = metadata_dictionary[f"{dataset_name}"]["Summary"]
-#### dataset_md.description = metadata_dictionary[f"{dataset_name}"]["Description"]
-#### dataset_md.credits = metadata_dictionary[f"{dataset_name}"]["Credits"]
-#### dataset_md.accessConstraints = metadata_dictionary[f"{dataset_name}"]["Access Constraints"]
-#### dataset_md.save()
-####
-#### dataset_md.synchronize("ALWAYS")
-####
-#### del dataset_md, _tags
-##
-## else:
-## pass
-## del dataset_name, dataset_path
-## del workspace
-##
-## base_project_folder = os.path.dirname(os.path.dirname(__file__))
-##
-## #parse_xml_file_format_and_saves(rf"{base_project_folder}\{project}\Current Metadata")
-##
-## del datasets
-##
-## # Declared Variables set in function
-## del project_gdb, base_project_folder, metadata_folder
-## del project_folder, scratch_folder, crfs_folder
-## del metadata_dictionary, workspaces
-##
-## # Imports
-## del dataset_title_dict, parse_xml_file_format_and_saves, unique_years
-## del md
-##
-## # Function Parameters
-## del base_project_file, project
-##
-## except KeyboardInterrupt:
-## sys.exit()
-## except arcpy.ExecuteWarning:
-## arcpy.AddWarning(arcpy.GetMessages(1))
-## except arcpy.ExecuteError:
-## arcpy.AddError(arcpy.GetMessages(2))
-## traceback.print_exc()
-## sys.exit()
-## except SystemExit:
-## arcpy.AddError(arcpy.GetMessages(2))
-## traceback.print_exc()
-## sys.exit()
-## except Exception:
-## arcpy.AddError(arcpy.GetMessages(2))
-## traceback.print_exc()
-## sys.exit()
-## except:
-## arcpy.AddError(arcpy.GetMessages(2))
-## traceback.print_exc()
-## sys.exit()
-## else:
-## # While in development, leave here. For test, move to finally
-## rk = [key for key in locals().keys() if not key.startswith('__')]
-## if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
-## return True
-## finally:
-## pass
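The removed branches above all repeated one routine — pick a template XML by dataset-name suffix, copy it onto the dataset with `md.Metadata`, fill titles/tags from `metadata_dictionary`, then synchronize — varying only the template filename. The dispatch part could collapse into a lookup table; this is a hypothetical sketch (`template_for` is not an existing helper, though the filenames are the ones the removed code used):

```python
# Map dataset-name suffixes to metadata template files, replacing the
# long elif chain in the removed code. Filenames mirror the originals.
SUFFIX_TEMPLATES = {
    "Extent_Points":    "extent_points_template.xml",
    "Fishnet":          "fishnet_template.xml",
    "Lat_Long":         "lat_long_template.xml",
    "Region":           "region_template.xml",
    "Sample_Locations": "sample_locations_template.xml",
    "Bathymetry":       "bathymetry_template.xml",
    "Latitude":         "latitude_template.xml",
    "Longitude":        "longitude_template.xml",
    "Raster_Mask":      "raster_mask_template.xml",
    "Mosaic":           "mosaic_template.xml",
}

def template_for(dataset_name, metadata_folder):
    """Return the template path for a dataset, or None if nothing matches.

    Longer suffixes are tried first so e.g. 'Extent_Points' cannot lose to a
    shorter overlapping suffix.
    """
    if dataset_name == "DisMAP_Regions":
        return rf"{metadata_folder}\dismap_regions_template.xml"
    for suffix in sorted(SUFFIX_TEMPLATES, key=len, reverse=True):
        if dataset_name.endswith(suffix):
            return rf"{metadata_folder}\{SUFFIX_TEMPLATES[suffix]}"
    return None
```

The per-dataset body (copy an empty `md.Metadata`, copy the template, assign the dictionary fields, `save()`, `synchronize("ALWAYS")`, `save()`) would then be written once instead of per branch.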
def create_maps(project_gdb=""):
try:
# Import
from arcpy import metadata as md
- from dismap_tools import dataset_title_dict, parse_xml_file_format_and_save
+ from dismap_tools import dataset_title_dict
arcpy.env.overwriteOutput = True
arcpy.env.parallelProcessingFactor = "100%"
@@ -2387,21 +1006,21 @@ def create_maps(project_gdb=""):
arcpy.SetSeverityLevel(2)
arcpy.SetMessageLevels(['NORMAL']) # NORMAL, COMMANDSYNTAX, DIAGNOSTICS, PROJECTIONTRANSFORMATION
- # Map Cleanup
- MapCleanup = False
- if MapCleanup:
- map_cleanup(base_project_file)
- del MapCleanup
+## # Map Cleanup
+## MapCleanup = False
+## if MapCleanup:
+## map_cleanup(base_project_file)
+## del MapCleanup
- base_project_folder = rf"{os.path.dirname(base_project_file)}"
+ base_project_folder = rf"{os.path.dirname(base_project_file)}" # noqa: F821
base_project_file = rf"{base_project_folder}\DisMAP.aprx"
- project_folder = rf"{base_project_folder}\{project}"
- project_gdb = rf"{project_folder}\{project}.gdb"
+ project_folder = rf"{base_project_folder}\{project}" # noqa: F821
+ project_gdb = rf"{project_folder}\{project}.gdb" # noqa: F821
metadata_folder = rf"{project_folder}\Export Metadata"
scratch_folder = rf"{project_folder}\Scratch"
arcpy.env.workspace = project_gdb
- arcpy.env.scratchWorkspace = rf"{scratch_folder}\scratch.gdb"
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
aprx = arcpy.mp.ArcGISProject(base_project_file)
home_folder = aprx.homeFolder
@@ -2432,38 +1051,27 @@ def create_maps(project_gdb=""):
if "IDW" in dataset_name:
arcpy.AddMessage(f"Dataset Name: {dataset_name}")
if "Indicators" in dataset_name:
- arcpy.AddMessage(f"\tRegion Indicators")
+ arcpy.AddMessage("\tRegion Indicators")
elif "LayerSpeciesYearImageName" in dataset_name:
- arcpy.AddMessage(f"\tRegion Layer Species Year Image Name")
+ arcpy.AddMessage("\tRegion Layer Species Year Image Name")
else:
- arcpy.AddMessage(f"\tRegion Table")
-
-## elif "GLMME" in dataset_name:
-## arcpy.AddMessage(f"Dataset Name: {dataset_name}")
-## if "Indicators" in dataset_name:
-## arcpy.AddMessage(f"\tGLMME Region Indicators")
-##
-## elif "LayerSpeciesYearImageName" in dataset_name:
-## arcpy.AddMessage(f"\tGLMME Layer Species Year Image Name")
-##
-## else:
-## arcpy.AddMessage(f"\tGLMME Region Table")
+ arcpy.AddMessage("\tRegion Table")
else:
arcpy.AddMessage(f"Dataset Name: {dataset_name}")
if "Indicators" in dataset_name:
- arcpy.AddMessage(f"\tMain Indicators Table")
+ arcpy.AddMessage("\tMain Indicators Table")
elif "LayerSpeciesYearImageName" in dataset_name:
- arcpy.AddMessage(f"\tLayer Species Year Image Name")
+ arcpy.AddMessage("\tLayer Species Year Image Name")
elif "Datasets" in dataset_name:
- arcpy.AddMessage(f"\tDataset Table")
+ arcpy.AddMessage("\tDataset Table")
elif "Species_Filter" in dataset_name:
- arcpy.AddMessage(f"\tSpecies Filter Table")
+ arcpy.AddMessage("\tSpecies Filter Table")
else:
arcpy.AddMessage(f"\tDataset Name: {dataset_name}")
@@ -2474,53 +1082,30 @@ def create_maps(project_gdb=""):
if "IDW" in dataset_name:
arcpy.AddMessage(f"Dataset Name: {dataset_name}")
if dataset_name.endswith("Boundary"):
- arcpy.AddMessage(f"\tBoundary")
+ arcpy.AddMessage("\tBoundary")
elif dataset_name.endswith("Extent_Points"):
- arcpy.AddMessage(f"\tExtent_Points")
+ arcpy.AddMessage("\tExtent_Points")
elif dataset_name.endswith("Fishnet"):
- arcpy.AddMessage(f"\tFishnet")
+ arcpy.AddMessage("\tFishnet")
elif dataset_name.endswith("Lat_Long"):
- arcpy.AddMessage(f"\tLat_Long")
+ arcpy.AddMessage("\tLat_Long")
elif dataset_name.endswith("Region"):
- arcpy.AddMessage(f"\tRegion")
+ arcpy.AddMessage("\tRegion")
elif dataset_name.endswith("Sample_Locations"):
- arcpy.AddMessage(f"\tSample_Locations")
+ arcpy.AddMessage("\tSample_Locations")
else:
pass
-## elif "GLMME" in dataset_name:
-## arcpy.AddMessage(f"Dataset Name: {dataset_name}")
-## if dataset_name.endswith("Boundary"):
-## arcpy.AddMessage(f"\tBoundary")
-##
-## elif dataset_name.endswith("Extent_Points"):
-## arcpy.AddMessage(f"\tExtent_Points")
-##
-## elif dataset_name.endswith("Fishnet"):
-## arcpy.AddMessage(f"\tFishnet")
-##
-## elif dataset_name.endswith("Lat_Long"):
-## arcpy.AddMessage(f"\tLat_Long")
-##
-## elif dataset_name.endswith("Region"):
-## arcpy.AddMessage(f"\tRegion")
-##
-## elif dataset_name.endswith("GRID_Points"):
-## arcpy.AddMessage(f"\tGRID_Points")
-##
-## else:
-## pass
-
elif "DisMAP_Regions" == dataset_name:
arcpy.AddMessage(f"Dataset Name: {dataset_name}")
if dataset_name.endswith("Regions"):
- arcpy.AddMessage(f"\tDisMAP Regions")
+ arcpy.AddMessage("\tDisMAP Regions")
else:
arcpy.AddMessage(f"Else Dataset Name: {dataset_name}")
@@ -2530,30 +1115,16 @@ def create_maps(project_gdb=""):
if "IDW" in dataset_name:
arcpy.AddMessage(f"Dataset Name: {dataset_name}")
if dataset_name.endswith("Bathymetry"):
- arcpy.AddMessage(f"\tBathymetry")
+ arcpy.AddMessage("\tBathymetry")
elif dataset_name.endswith("Latitude"):
- arcpy.AddMessage(f"\tLatitude")
+ arcpy.AddMessage("\tLatitude")
elif dataset_name.endswith("Longitude"):
- arcpy.AddMessage(f"\tLongitude")
+ arcpy.AddMessage("\tLongitude")
elif dataset_name.endswith("Raster_Mask"):
- arcpy.AddMessage(f"\tRaster_Mask")
-
-## elif "GLMME" in dataset_name:
-## arcpy.AddMessage(f"Dataset Name: {dataset_name}")
-## if dataset_name.endswith("Bathymetry"):
-## arcpy.AddMessage(f"\tBathymetry")
-##
-## elif dataset_name.endswith("Latitude"):
-## arcpy.AddMessage(f"\tLatitude")
-##
-## elif dataset_name.endswith("Longitude"):
-## arcpy.AddMessage(f"\tLongitude")
-##
-## elif dataset_name.endswith("Raster_Mask"):
-## arcpy.AddMessage(f"\tRaster_Mask")
+ arcpy.AddMessage("\tRaster_Mask")
else:
pass
@@ -2562,17 +1133,14 @@ def create_maps(project_gdb=""):
if "IDW" in dataset_name:
arcpy.AddMessage(f"Dataset Name: {dataset_name}")
if dataset_name.endswith("Mosaic"):
- arcpy.AddMessage(f"\tMosaic")
-
-## elif "GLMME" in dataset_name:
-## arcpy.AddMessage(f"Dataset Name: {dataset_name}")
-## if dataset_name.endswith("Mosaic"):
-## arcpy.AddMessage(f"\tMosaic")
+ arcpy.AddMessage("\tMosaic")
+ else:
+ pass
elif "CRF" in dataset_name:
arcpy.AddMessage(f"Dataset Name: {dataset_name}")
if dataset_name.endswith("CRF"):
- arcpy.AddMessage(f"\tCRF")
+ arcpy.AddMessage("\tCRF")
else:
pass
@@ -2584,223 +1152,6 @@ def create_maps(project_gdb=""):
del dataset_name, dataset_path
del datasets
-## # DatasetCode, CSVFile, TransformUnit, TableName, GeographicArea, CellSize,
-## # PointFeatureType, FeatureClassName, Region, Season, DateCode, Status,
-## # DistributionProjectCode, DistributionProjectName, SummaryProduct,
-## # FilterRegion, FilterSubRegion, FeatureServiceName, FeatureServiceTitle,
-## # MosaicName, MosaicTitle, ImageServiceName, ImageServiceTitle
-##
-## # Get values for table_name from Datasets table
-## #fields = ["FeatureClassName", "FeatureServiceName", "FeatureServiceTitle"]
-## fields = ["DatasetCode", "PointFeatureType", "FeatureClassName", "Region", "Season", "DateCode", "DistributionProjectCode"]
-## datasets = [row for row in arcpy.da.SearchCursor(rf"{project_gdb}\Datasets", fields, where_clause = f"FeatureClassName IS NOT NULL AND DistributionProjectCode NOT IN ('GLMME', 'GFDL')")]
-## #datasets = [row for row in arcpy.da.SearchCursor(rf"{project_gdb}\Datasets", fields, where_clause = f"FeatureClassName IS NOT NULL and TableName = 'AI_IDW'")]
-## del fields
-##
-## for dataset in datasets:
-## dataset_code, point_feature_type, feature_class_name, region_latitude, season, date_code, distribution_project_code = dataset
-##
-## feature_service_name = f"{dataset_code}_{point_feature_type}_{date_code}".replace("None", "").replace(" ", "_").replace("__", "_")
-##
-## if distribution_project_code == "IDW":
-## feature_service_title = f"{region_latitude} {season} {point_feature_type} {date_code}".replace("None", "").replace(" ", " ")
-## elif distribution_project_code in ["GLMME", "GFDL"]:
-## feature_service_title = f"{region_latitude} {distribution_project_code} {point_feature_type} {date_code}".replace("None", "").replace(" ", " ")
-## else:
-## feature_service_title = f"{feature_service_name}".replace("_", " ")
-##
-## map_title = feature_service_title.replace("GRID Points", "").replace("Sample Locations", "").replace(" ", " ")
-##
-## feature_class_path = f"{project_gdb}\{feature_class_name}"
-##
-## arcpy.AddMessage(f"Dataset Code: {dataset_code}")
-## arcpy.AddMessage(f"\tFeature Service Name: {feature_service_name}")
-## arcpy.AddMessage(f"\tFeature Service Title: {feature_service_title}")
-## arcpy.AddMessage(f"\tMap Title: {map_title}")
-## arcpy.AddMessage(f"\tFeature Class Name: {feature_class_name}")
-## arcpy.AddMessage(f"\tFeature Class Path: {feature_class_path}")
-##
-## height = arcpy.Describe(feature_class_path).extent.YMax - arcpy.Describe(feature_class_path).extent.YMin
-## width = arcpy.Describe(feature_class_path).extent.XMax - arcpy.Describe(feature_class_path).extent.XMin
-##
-## # map_width, map_height
-## map_width, map_height = 2, 3
-## #map_width, map_height = 8.5, 11
-##
-## if height > width:
-## page_height = map_height; page_width = map_width
-## elif height < width:
-## page_height = map_width; page_width = map_height
-## else:
-## page_width = map_width; page_height = map_height
-##
-## del map_width, map_height
-## del height, width
-##
-## if map_title not in [cm.name for cm in aprx.listMaps()]:
-## arcpy.AddMessage(f"Creating Map: {map_title}")
-## aprx.createMap(f"{map_title}", "Map")
-## aprx.save()
-##
-## if map_title not in [cl.name for cl in aprx.listLayouts()]:
-## arcpy.AddMessage(f"Creating Layout: {map_title}")
-## aprx.createLayout(page_width, page_height, "INCH", f"{map_title}")
-## aprx.save()
-##
-## del feature_service_name, feature_service_title
-## del dataset_code, point_feature_type, feature_class_name, region_latitude, season
-## del date_code, distribution_project_code
-##
-## current_map = [cm for cm in aprx.listMaps() if cm.name == map_title][0]
-## arcpy.AddMessage(f"Current Map: {current_map.name}")
-##
-## feature_class_layer = arcpy.management.MakeFeatureLayer(feature_class_path, f"{map_title}")
-##
-## feature_class_layer_file = arcpy.management.SaveToLayerFile(feature_class_layer, rf"{project_folder}\Layers\{feature_class_layer}.lyrx")
-## del feature_class_layer_file
-##
-## feature_class_layer_file = arcpy.mp.LayerFile(rf"{project_folder}\Layers\{feature_class_layer}.lyrx")
-##
-## arcpy.management.Delete(feature_class_layer)
-## del feature_class_layer
-##
-## current_map.addLayer(feature_class_layer_file)
-## del feature_class_layer_file
-##
-## #aprx_basemaps = aprx.listBasemaps()
-## #basemap = 'GEBCO Basemap/Contours (NOAA NCEI Visualization)'
-## basemap = "Terrain with Labels"
-##
-## current_map.addBasemap(basemap)
-## del basemap
-##
-## #current_map_view = current_map.defaultView
-## #current_map_view.exportToPNG(rf"{project_folder}\Layers\{map_title}.png", width=200, height=133, resolution = 96, color_mode="24-BIT_TRUE_COLOR", embed_color_profile=True)
-## #del current_map_view
-##
-## # # from arcpy import metadata as md
-## # #
-## # # fc_md = md.Metadata(feature_class_path)
-## # # fc_md.thumbnailUri = rf"{project_folder}\Layers\{map_title}.png"
-## # # fc_md.save()
-## # # del fc_md
-## # # del md
-##
-## aprx.save()
-##
-## current_layout = [cl for cl in aprx.listLayouts() if cl.name == map_title][0]
-## arcpy.AddMessage(f"Current Layout: {current_layout.name}")
-##
-## current_layout.openView()
-##
-## arcpy.AddMessage(f"Create a new map frame using a point geometry")
-## #Create a new map frame using a point geometry
-## mf1 = current_layout.createMapFrame(arcpy.Point(0.01,0.01), current_map, 'New MF - Point')
-## #mf1.elementWidth = 10
-## #mf1.elementHeight = 7.5
-## mf1.elementWidth = page_width - 0.01
-## mf1.elementHeight = page_height - 0.01
-##
-## lyr = current_map.listLayers(f"{map_title}")[0]
-##
-## #Zoom to ALL selected features and export to PDF
-## arcpy.SelectLayerByAttribute_management(lyr, 'NEW_SELECTION')
-## mf1.zoomToAllLayers(True)
-## arcpy.SelectLayerByAttribute_management(lyr, 'CLEAR_SELECTION')
-##
-## #Set the map frame extent to the extent of a layer and export to PDF
-## mf1.camera.setExtent(mf1.getLayerExtent(lyr, False, True))
-## mf1.camera.scale = mf1.camera.scale * 1.1 #add a slight buffer
-##
-## del lyr
-##
-## arcpy.AddMessage(f"Create a new bookmark set to the map frame's default extent")
-## #Create a new bookmark set to the map frame's default extent
-## bkmk = mf1.createBookmark('Default Extent', "The map's default extent")
-## bkmk.updateThumbnail()
-## del mf1
-## del bkmk
-##
-## #Create point text element using a system style item
-## #txtStyleItem = aprx.listStyleItems('ArcGIS 2D', 'TEXT', 'Title (Serif)')[0]
-## #ptTxt = aprx.createTextElement(current_layout, arcpy.Point(5.5, 4.25), 'POINT', f'{map_title}', 10, style_item=txtStyleItem)
-## #del txtStyleItem
-##
-## #Change the anchor position and reposition the text to center
-## #ptTxt.setAnchor('Center_Point')
-## #ptTxt.elementPositionX = page_width / 2.0
-## #ptTxt.elementPositionY = page_height - 0.25
-## #del ptTxt
-##
-## #arcpy.AddMessage(f"Using CIM to update border")
-## #current_layout_cim = current_layout.getDefinition('V3')
-## #for elm in current_layout_cim.elements:
-## # if type(elm).__name__ == 'CIMMapFrame':
-## # if elm.graphicFrame.borderSymbol.symbol.symbolLayers:
-## # sym = elm.graphicFrame.borderSymbol.symbol.symbolLayers[0]
-## # sym.width = 5
-## # sym.color.values = [255, 0, 0, 100]
-## # else:
-## # arcpy.AddWarning(elm.name + ' has NO symbol layers')
-## #current_layout.setDefinition(current_layout_cim)
-## #del current_layout_cim, elm, sym
-##
-## ExportLayout = True
-## if ExportLayout:
-## #Export the resulting imported layout and changes to JPEG
-## arcpy.AddMessage(f"Exporting '{current_layout.name}'")
-## current_layout.exportToJPEG(rf"{project_folder}\Layouts\{current_layout.name}.jpg")
-## del ExportLayout
-##
-##
-## from arcpy import metadata as md
-##
-## fc_md = md.Metadata(feature_class_path)
-## #fc_md.thumbnailUri = rf"{project_folder}\Layers\{map_title}.png"
-## fc_md.thumbnailUri = rf"{project_folder}\Layouts\{current_layout.name}.jpg"
-## fc_md.save()
-## del fc_md
-## del md
-##
-## aprx.save()
-##
-## aprx.deleteItem(current_map); del current_map
-## aprx.deleteItem(current_layout); del current_layout
-##
-## del page_width, page_height
-## del map_title, feature_class_path
-## del dataset
-## del datasets
-##
-## # TODO: Possibly create a dictionary that can be saved to JSON
-##
-## aprx.save()
-##
-## arcpy.AddMessage(f"\nCurrent Maps & Layouts")
-##
-## current_maps = aprx.listMaps()
-## current_layouts = aprx.listLayouts()
-##
-## if current_maps:
-## arcpy.AddMessage(f"\nCurrent Maps\n")
-## for current_map in current_maps:
-## arcpy.AddMessage(f"\tProject Map: {current_map.name}")
-## del current_map
-## else:
-## arcpy.AddWarning("No maps in Project")
-##
-## if current_layouts:
-## arcpy.AddMessage(f"\nCurrent Layouts\n")
-## for current_layout in current_layouts:
-## arcpy.AddMessage(f"\tProject Layout: {current_layout.name}")
-## del current_layout
-## else:
-## arcpy.AddWarning("No layouts in Project")
-##
-## arcpy.AddMessage(f"\n{'-' * 90}\n")
-##
-## del current_layouts, current_maps
-
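The removed map-creation block above chose layout orientation by comparing the layer extent's height and width. That decision is arcpy-free and can be sketched directly (the 2 × 3 inch page size is the one the removed code used; `page_size` is a hypothetical helper):

```python
def page_size(extent_width, extent_height, short_side=2, long_side=3):
    """Pick (page_width, page_height) for a layout from a layer extent:
    portrait for tall extents, landscape for wide ones, and portrait for
    the square case, matching the removed if/elif/else."""
    if extent_height >= extent_width:
        return short_side, long_side   # portrait (also the square fallback)
    return long_side, short_side       # landscape
```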
# Declared Variables set in function for aprx
del home_folder
# Save aprx one more time and then delete
@@ -2813,39 +1164,27 @@ def create_maps(project_gdb=""):
del metadata_dictionary
# Imports
- del dismap_tools, dataset_title_dict, md
+ del dataset_title_dict, md
# Function Parameters
del project_gdb
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
def script_tool(project_gdb=""):
try:
@@ -2855,7 +1194,7 @@ def script_tool(project_gdb=""):
start_time = time()
arcpy.AddMessage(f"{'-' * 80}")
arcpy.AddMessage(f"Python Script: {os.path.basename(__file__)}")
- arcpy.AddMessage(f"Location: ..\Documents\ArcGIS\Projects\..\{os.path.basename(os.path.dirname(__file__))}\{os.path.basename(__file__)}")
+ arcpy.AddMessage(f"Location: ../{'/'.join(__file__.split(os.sep)[-4:])}")
arcpy.AddMessage(f"Python Version: {sys.version}")
arcpy.AddMessage(f"Environment: {os.path.basename(sys.exec_prefix)}")
arcpy.AddMessage(f"Start Time: {strftime('%a %b %d %I:%M %p', localtime(start_time))}")
@@ -2867,11 +1206,11 @@ def script_tool(project_gdb=""):
del project_folder
# Create project scratch workspace, if missing
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
if not arcpy.Exists(scratch_folder):
- os.makedirs(rf"{scratch_folder}")
- if not arcpy.Exists(rf"{scratch_folder}\scratch.gdb"):
- arcpy.management.CreateFileGDB(rf"{scratch_folder}", f"scratch")
+ os.makedirs(scratch_folder)
+ if not arcpy.Exists(os.path.join(scratch_folder, "scratch.gdb")):
+ arcpy.management.CreateFileGDB(scratch_folder, "scratch")
del scratch_folder
# Set basic arcpy.env variables
@@ -2890,7 +1229,7 @@ def script_tool(project_gdb=""):
create_feature_class_services(project_gdb=project_gdb)
del CreateFeaturClasseServices
- CreateImagesServices = False
+ CreateImagesServices = True
if CreateImagesServices:
create_image_services(project_gdb=project_gdb)
del CreateImagesServices
@@ -2939,48 +1278,47 @@ def script_tool(project_gdb=""):
del elapse_time, end_time, start_time
del gmtime, localtime, strftime, time
- except KeyboardInterrupt:
- sys.exit()
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(f"Caught an arcpy.ExecuteWarning error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddWarning(arcpy.GetMessages(1))
except arcpy.ExecuteError:
- arcpy.AddError(f"Caught an arcpy.ExecuteError error in the '{inspect.stack()[0][3]}' function.")
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- sys.exit()
- except SystemExit as se:
- arcpy.AddError(f"Caught an SystemExit error: {se} in the '{inspect.stack()[0][3]}' function.")
- sys.exit()
- except Exception as e:
- arcpy.AddError(f"Caught an Exception error: {e} in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
- except:
- arcpy.AddError(f"Caught an except error in the '{inspect.stack()[0][3]}' function.")
- traceback.print_exc()
- sys.exit()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- # While in development, leave here. For test, move to finally
- rk = [key for key in locals().keys() if not key.startswith('__')]
- if rk: arcpy.AddMessage(f"WARNING!! Remaining Keys in the '{inspect.stack()[0][3]}' function at line number {inspect.stack()[0][2]}\n\t##--> '{', '.join(rk)}' <--##"); del rk
return True
- finally:
- pass
if __name__ == '__main__':
try:
project_gdb = arcpy.GetParameterAsText(0)
+
if not project_gdb:
- project_gdb = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025\August 1 2025.gdb"
+ project_gdb = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026", "February 1 2026.gdb")
else:
pass
+
script_tool(project_gdb)
+
arcpy.SetParameterAsText(1, "Result")
+
del project_gdb
- except:
- traceback.print_exc()
- else:
- pass
- finally:
- pass
\ No newline at end of file
+
+ except arcpy.ExecuteError:
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/zip_and_unzip_csv_data.py b/ArcGIS-Analysis-Python/src/dismap_tools/zip_and_unzip_csv_data.py
index f33ca37..a63bdb9 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/zip_and_unzip_csv_data.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/zip_and_unzip_csv_data.py
@@ -5,22 +5,33 @@
- Update derived parameter values using arcpy.SetParameter() or
arcpy.SetParameterAsText()
"""
+import os
+import sys
+import traceback
+
import arcpy
-import traceback, os, sys
-def script_tool(home_folder, source_zip_file):
+
+def trace():
+ import sys, traceback # noqa: E401
+ tb = sys.exc_info()[2]
+ tbinfo = traceback.format_tb(tb)[0]
+ line = tbinfo.split(", ")[1]
+ filename = __file__  # report the running script rather than a hardcoded "test.py"
+ synerror = traceback.format_exc().splitlines()[-1]
+ return line, filename, synerror
+
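The `trace()` helper added above follows a common arcpy script-tool pattern: pull the failing line number and the final error message out of the traceback that is currently being handled. A minimal arcpy-free sketch of the same idea is shown below; as an illustrative liberty (not the tool's exact code), the filename here is read out of the traceback frame header instead of being supplied separately:

```python
import sys
import traceback

def trace():
    """Return (line, filename, error) for the exception currently being handled."""
    tb = sys.exc_info()[2]                        # traceback object of the active exception
    tbinfo = traceback.format_tb(tb)[0]           # first frame: '  File "x.py", line 3, in f\n    ...'
    line = tbinfo.split(", ")[1]                  # e.g. 'line 3'
    filename = tbinfo.split('"')[1]               # path quoted in the frame header
    synerror = traceback.format_exc().splitlines()[-1]  # e.g. 'ZeroDivisionError: division by zero'
    return line, filename, synerror

try:
    1 / 0
except ZeroDivisionError:
    line, filename, err = trace()
    print(line, err)
```

Because `trace()` only inspects `sys.exc_info()`, it must be called from inside an `except` block, which is exactly how the scripts in this diff use it.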
+def script_tool(project_folder, source_zip_file):
"""Script code goes below"""
try:
- import copy
from zipfile import ZipFile
from arcpy import metadata as md
from lxml import etree
- from io import StringIO, BytesIO
+ from io import StringIO
aprx = arcpy.mp.ArcGISProject("CURRENT")
#aprx.save()
- home_folder = aprx.homeFolder
- arcpy.AddMessage(home_folder)
- out_data_path = rf"{home_folder}\CSV_Data"
+ project_folder = aprx.homeFolder
+ arcpy.AddMessage(project_folder)
+ out_data_path = rf"{project_folder}\CSV_Data"
import json
json_path = rf"{out_data_path}\root_dict.json"
with open(json_path, "r") as json_file:
@@ -56,7 +67,7 @@ def script_tool(home_folder, source_zip_file):
arcpy.AddMessage(f"Adding metadata to CSV file")
tmp_workspace = arcpy.env.workspace
arcpy.env.workspace = out_data_path
- contacts = rf"{os.path.dirname(home_folder)}\Datasets\DisMAP Contacts 2025 08 01.xml"
+ contacts = rf"{os.path.dirname(project_folder)}\Datasets\DisMAP Contacts 2025 08 01.xml"
csv_files = arcpy.ListFiles("*_IDW.csv")
for csv_file in csv_files:
arcpy.AddMessage(f"\t{csv_file}")
@@ -102,10 +113,11 @@ def script_tool(home_folder, source_zip_file):
del csv_files
arcpy.env.workspace = tmp_workspace
del tmp_workspace
- del home_folder
+ del project_folder
del source_zip_file
del md
return out_data_path
+
except arcpy.ExecuteError:
arcpy.AddError(arcpy.GetMessages(2))
traceback.print_exc()
@@ -119,24 +131,31 @@ def script_tool(home_folder, source_zip_file):
del out_data_path
if __name__ == "__main__":
try:
- home_folder = arcpy.GetParameterAsText(0)
- if not home_folder:
- home_folder = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\August 1 2025"
+ project_folder = arcpy.GetParameterAsText(0)
+ if not project_folder:
+ project_folder = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "February 1 2026")
else:
pass
-
- source_zip_file = arcpy.GetParameterAsText(1)
+
+ source_zip_file = arcpy.GetParameterAsText(1)
if not source_zip_file:
- source_zip_file = rf"{os.path.expanduser('~')}\Documents\ArcGIS\Projects\DisMAP\ArcGIS-Analysis-Python\Datasets\CSV Data 2025 08 01.zip"
+ source_zip_file = os.path.join(os.path.expanduser('~'), "Documents", "ArcGIS", "Projects", "DisMAP", "ArcGIS-Analysis-Python", "Initial Data", "CSV Data 20260201.zip")
else:
pass
-
- script_tool(home_folder, source_zip_file)
- arcpy.SetParameterAsText(2, "Result")
+
+ script_tool(project_folder, source_zip_file)
+
+ arcpy.SetParameter(2, True)
+
except arcpy.ExecuteError:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- except:
- import traceback
- traceback.print_exc()
- arcpy.AddError(arcpy.GetMessages(2))
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools/zip_and_unzip_shapefile_data.py b/ArcGIS-Analysis-Python/src/dismap_tools/zip_and_unzip_shapefile_data.py
index 8e8b613..dba199d 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools/zip_and_unzip_shapefile_data.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools/zip_and_unzip_shapefile_data.py
@@ -6,19 +6,22 @@
arcpy.SetParameterAsText()
"""
import traceback
+
import arcpy
+
def script_tool(home_folder="", source_zip_file=""):
"""Script code goes below"""
try:
import os
- import copy
from zipfile import ZipFile
#aprx = arcpy.mp.ArcGISProject("CURRENT")
#aprx.save()
#home_folder = aprx.homeFolder
arcpy.AddMessage(home_folder)
+
out_data_path = rf"{home_folder}\Dataset_Shapefiles"
arcpy.AddMessage(out_data_path)
+
# Change Directory
os.chdir(out_data_path)
arcpy.AddMessage(f"Un-Zipping files from {os.path.basename(source_zip_file)}")
@@ -27,23 +30,29 @@ def script_tool(home_folder="", source_zip_file=""):
archive.extract(file, ".")
del file
del archive
+
arcpy.AddMessage(f"Done Un-Zipping files from {os.path.basename(source_zip_file)}")
+
del home_folder
del source_zip_file
- return out_data_path
- except arcpy.ExecuteWarning:
- arcpy.AddWarning(arcpy.GetMessages(1))
+
except arcpy.ExecuteError:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
- except:
- arcpy.AddError(arcpy.GetMessages(2))
- traceback.print_exc()
+ #Return Geoprocessing tool specific errors
+ line, filename, err = trace()
+ arcpy.AddError("Geoprocessing error on " + line + " of " + filename + " :")
+ for msg in range(0, arcpy.GetMessageCount()):
+ if arcpy.GetSeverity(msg) == 2:
+ arcpy.AddReturnMessage(msg)
+ return False
+ except: # noqa: E722
+ #Gets non-tool errors
+ line, filename, err = trace()
+ arcpy.AddError("Python error on " + line + " of " + filename)
+ arcpy.AddError(err)
+ return False
else:
- pass
- finally:
- pass
- #del out_data_path
+ return True
+
if __name__ == "__main__":
try:
home_folder = arcpy.GetParameterAsText(0)
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools_dev/arcpy da Editor.py b/ArcGIS-Analysis-Python/src/dismap_tools_dev/arcpy da Editor.py
deleted file mode 100644
index 18f366b..0000000
--- a/ArcGIS-Analysis-Python/src/dismap_tools_dev/arcpy da Editor.py
+++ /dev/null
@@ -1,45 +0,0 @@
-#-------------------------------------------------------------------------------
-# Name: module1
-# Purpose:
-#
-# Author: john.f.kennedy
-#
-# Created: 06/12/2024
-# Copyright: (c) john.f.kennedy 2024
-# Licence:
-#-------------------------------------------------------------------------------
-import arcpy
-
-def main():
- try:
- workspace = r"{os.environ['USERPROFILE']}\Documents\ArcGIS\Projects\National Mapper\National Mapper.gdb"
- edit = arcpy.da.Editor(workspace)
- arcpy.AddMessage("edit created")
- edit.startEditing()
- arcpy.AddMessage("edit started")
- edit.startOperation()
- arcpy.AddMessage("operation started")
- # Perform edits
- #with arcpy.da.InsertCursor(fc, fields) as fc_icursor:
- # fc_icursor.insertRow(someNewRow)
- edit.stopOperation()
- arcpy.AddMessage("operation stopped")
- edit.stopEditing(True) ## Stop the edit session with True to save the changes
- arcpy.AddMessage("edit stopped")
- except Exception as err:
- arcpy.AddMessage(err)
- if 'edit' in locals():
- if edit.isEditing:
- edit.stopOperation()
- arcpy.AddMessage("operation stopped in except")
- edit.stopEditing(False) ## Stop the edit session with False to abandon the changes
- arcpy.AddMessage("edit stopped in except")
- except:
- import traceback
- traceback.print_exc()
- finally:
- # Cleanup
- arcpy.management.ClearWorkspaceCache()
-
-if __name__ == '__main__':
- main()
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools_dev/dev_dismap_metadata_processing.py b/ArcGIS-Analysis-Python/src/dismap_tools_dev/dev_dismap_metadata_processing.py
index 154026c..0c6645b 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools_dev/dev_dismap_metadata_processing.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools_dev/dev_dismap_metadata_processing.py
@@ -117,7 +117,7 @@ def dataset_title_dict(project_gdb=""):
__datasets_dict = {}
- dataset_codes = {row[0] : [row[1], row[2], row[3], row[4], row[5]] for row in arcpy.da.SearchCursor(rf"{project_gdb}\Datasets", ["DatasetCode", "PointFeatureType", "DistributionProjectCode", "FilterRegion", "FilterSubRegion", "Season"])}
+ dataset_codes = {row[0] : [row[1], row[2], row[3], row[4], row[5]] for row in arcpy.da.SearchCursor(os.path.join(project_gdb, "Datasets"), ["DatasetCode", "PointFeatureType", "DistributionProjectCode", "FilterRegion", "FilterSubRegion", "Season"])}
for dataset_code in dataset_codes:
point_feature_type = dataset_codes[dataset_code][0] if dataset_codes[dataset_code][0] else ""
distribution_project_code = dataset_codes[dataset_code][1] if dataset_codes[dataset_code][1] else ""
@@ -810,7 +810,7 @@ def import_basic_template_xml(dataset_path=""):
scratch_folder = rf"{project_folder}\Scratch"
arcpy.env.workspace = project_gdb
- arcpy.env.scratchWorkspace = rf"{scratch_folder}\scratch.gdb"
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
import json
json_path = rf"{project_folder}\root_dict.json"
@@ -1138,7 +1138,7 @@ def update_eainfo_xml_elements(dataset_path=""):
scratch_folder = rf"{project_folder}\Scratch"
arcpy.env.workspace = project_gdb
- arcpy.env.scratchWorkspace = rf"{scratch_folder}\scratch.gdb"
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
import json
json_path = rf"{project_folder}\root_dict.json"
@@ -1330,7 +1330,7 @@ def insert_missing_elements(dataset_path):
del json
arcpy.env.workspace = project_gdb
- arcpy.env.scratchWorkspace = rf"{scratch_folder}\scratch.gdb"
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
del scratch_folder
del project_folder
del project_gdb
@@ -3041,7 +3041,7 @@ def basic_metadata_report(dataset_path=""):
scratch_folder = rf"{project_folder}\Scratch"
arcpy.env.workspace = project_gdb
- arcpy.env.scratchWorkspace = rf"{scratch_folder}\scratch.gdb"
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
del scratch_folder
del project_folder
del project_gdb
@@ -4167,7 +4167,7 @@ def main(project_gdb=""):
#print(project_folder)
arcpy.env.workspace = project_gdb
- arcpy.env.scratchWorkspace = rf"{scratch_folder}\scratch.gdb"
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
#metadata_dictionary = dataset_title_dict(project_gdb)
#for key in metadata_dictionary:
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools_dev/dev_export_arcgis_metadata.py b/ArcGIS-Analysis-Python/src/dismap_tools_dev/dev_export_arcgis_metadata.py
index 86a4184..0c17a0d 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools_dev/dev_export_arcgis_metadata.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools_dev/dev_export_arcgis_metadata.py
@@ -37,7 +37,7 @@ def export_metadata(project_gdb="", metadata_workspace=""):
# Define variables
project_folder = os.path.dirname(project_gdb)
scratch_folder = rf"{project_folder}\Scratch"
- scratch_gdb = rf"{scratch_folder}\scratch.gdb"
+ scratch_gdb = os.path.join(scratch_folder, "scratch.gdb")
# Set the workspace environment to local file geodatabase
arcpy.env.workspace = project_gdb
diff --git a/ArcGIS-Analysis-Python/src/dismap_tools_dev/dismap_metadata_processing.py b/ArcGIS-Analysis-Python/src/dismap_tools_dev/dismap_metadata_processing.py
index cb3cc2a..7d51345 100644
--- a/ArcGIS-Analysis-Python/src/dismap_tools_dev/dismap_metadata_processing.py
+++ b/ArcGIS-Analysis-Python/src/dismap_tools_dev/dismap_metadata_processing.py
@@ -53,7 +53,7 @@ def create_basic_template_xml_files(base_project_file="", project=""):
for workspace in workspaces:
arcpy.env.workspace = workspace
- arcpy.env.scratchWorkspace = rf"{scratch_folder}\scratch.gdb"
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
datasets = list()
@@ -692,7 +692,7 @@ def import_basic_template_xml_files(base_project_file="", project=""):
for workspace in workspaces:
arcpy.env.workspace = workspace
- arcpy.env.scratchWorkspace = rf"{scratch_folder}\scratch.gdb"
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
datasets = list()
@@ -1639,7 +1639,7 @@ def create_thumbnails(base_project_file="", project=""):
scratch_folder = rf"{project_folder}\Scratch"
arcpy.env.workspace = project_gdb
- arcpy.env.scratchWorkspace = rf"{scratch_folder}\scratch.gdb"
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
aprx = arcpy.mp.ArcGISProject(base_project_file)
home_folder = aprx.homeFolder
@@ -1649,7 +1649,7 @@ def create_thumbnails(base_project_file="", project=""):
for workspace in workspaces:
arcpy.env.workspace = workspace
- arcpy.env.scratchWorkspace = rf"{scratch_folder}\scratch.gdb"
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
datasets = list()
@@ -2002,9 +2002,9 @@ def export_to_inport_xml_files(base_project_file="", project=""):
scratch_folder = rf"{project_folder}\Scratch"
arcpy.env.workspace = project_gdb
- arcpy.env.scratchWorkspace = rf"{scratch_folder}\scratch.gdb"
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
- datasets = [rf"{project_gdb}\Species_Filter", rf"{project_gdb}\Indicators", rf"{project_gdb}\DisMAP_Regions", rf"{project_gdb}\GMEX_IDW_Sample_Locations", rf"{project_gdb}\GMEX_IDW_Mosaic", rf"{crfs_folder}\GMEX_IDW.crf"]
+ datasets = [rf"{project_gdb}\Species_Filter", rf"{project_gdb}\Indicators", os.path.join(project_gdb, "DisMAP_Regions"), rf"{project_gdb}\GMEX_IDW_Sample_Locations", rf"{project_gdb}\GMEX_IDW_Mosaic", rf"{crfs_folder}\GMEX_IDW.crf"]
for dataset_path in sorted(datasets):
print(dataset_path)
@@ -2092,7 +2092,7 @@ def create_maps(base_project_file="", project="", dataset=""):
scratch_folder = rf"{project_folder}\Scratch"
arcpy.env.workspace = project_gdb
- arcpy.env.scratchWorkspace = rf"{scratch_folder}\scratch.gdb"
+ arcpy.env.scratchWorkspace = os.path.join(scratch_folder, "scratch.gdb")
aprx = arcpy.mp.ArcGISProject(base_project_file)
@@ -2477,7 +2477,7 @@ def main(project=""):
CreateMaps = False
if CreateMaps:
- result = create_maps(base_project_file, project, dataset=rf"{project_gdb}\DisMAP_Regions")
+ result = create_maps(base_project_file, project, dataset=os.path.join(project_gdb, "DisMAP_Regions"))
results.extend(result); del result
del CreateMaps
diff --git a/README.md b/README.md
index 126c271..068162f 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-# [The Distribution Mapping and Analysis Portal (DisMAP)](https://github.com/nmfs-fish-tools/DisMAP)
+# [The Distribution Mapping and Analysis Portal (DisMAP)](https://github.com/nmfs-fish-tools/DisMAP)
> This code is always in development. Find code used for various reports in the code [releases](https://github.com/nmfs-fish-tools/DisMAP/releases).
@@ -7,14 +7,37 @@ The NOAA Fisheries Distribution Mapping and Analysis Portal (DisMAP) provides ea
* Center of biomass
* Range limits
-This repository provides the data processing and analysis code used to develop the spatial distribution and indicators presented in the portal. For more information and to launch the portal visit: https://apps-st.fisheries.noaa.gov/dismap/index.html.
+This repository provides the data processing and analysis code used to develop the spatial distribution and indicators presented in the portal. For more information and to launch the portal visit: https://apps-st.fisheries.noaa.gov/dismap/index.html.
Explanation of Folders:
1. data_processing_rcode
This folder holds all the R scripts needed to download and process the regional bottom trawl survey data. Opening the DisMAP_Project Rproject file will open all necessary R scripts to run the analysis and set up the appropriate directory structure. Follow the instructions in each of the "download_x.R" scripts to download the raw survey data (or obtain it from a regional POC). Once the data is in the "data" folder, run Compile_Dismap_Current.R to process and clean the data. Then run create_data_for_map_generation.R to get the data into the file format needed by the Python scripts that generate the interpolated biomass and indicators (as described below).
2. ArcGIS Analysis - Python
-This folder houses the scripts for generating the interpolated biomass and calculating the distribution indicators (latitude, depth, range limits, etc).
+This folder houses the scripts for generating the interpolated biomass and calculating the distribution indicators (latitude, depth, range limits, etc).
+
+## Utility Scripts
+
+This repository includes utility scripts to help with data management and preparation.
+
+### `rename_spaces_to_hyphens.py`
+
+A utility script is provided to rename files and folders by replacing spaces with hyphens. This can be useful for ensuring file path compatibility with certain tools or shell environments that may not handle spaces well.
+
+- **Location:** `ArcGIS-Analysis-Python/src/dismap_tools/rename_spaces_to_hyphens.py`
+- **Purpose:** Recursively finds and renames files and directories under a specified path, replacing spaces in their names with hyphens (`-`).
+
+**Usage**
+
+Run the script from your command line, providing the path to the directory you want to process. To see which files and folders would be renamed without actually performing the rename operation, use the `--dry-run` flag. This is recommended before running the script for the first time.
+
+```bash
+# Perform a dry run to preview changes
+python ArcGIS-Analysis-Python/src/dismap_tools/rename_spaces_to_hyphens.py --dry-run "C:\path\to\your\data folder"
+
+# Execute the renaming process
+python ArcGIS-Analysis-Python/src/dismap_tools/rename_spaces_to_hyphens.py "C:\path\to\your\data folder"
+```
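The script itself is not reproduced in this diff, but the behavior described above (a recursive walk with an optional dry run) can be sketched roughly as follows; the function name and signature are illustrative, not necessarily those of the actual tool:

```python
import os

def rename_spaces_to_hyphens(root, dry_run=True):
    """Replace spaces with hyphens in file and folder names under `root`.

    Walks bottom-up (topdown=False) so each directory is renamed only after
    its contents, keeping the paths used for child renames valid.
    """
    planned = []
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        for name in filenames + dirnames:
            if " " in name:
                src = os.path.join(dirpath, name)
                dst = os.path.join(dirpath, name.replace(" ", "-"))
                planned.append((src, dst))
                if not dry_run:
                    os.rename(src, dst)
    return planned
```

With `dry_run=True` the function only collects the (source, destination) pairs without touching the filesystem, mirroring the `--dry-run` flag above.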
## Suggestions and Comments
@@ -56,4 +79,3 @@ works of the Software outside of the United States.
[U.S. Department of Commerce](https://www.commerce.gov/) \| [National
Oceanographic and Atmospheric Administration](https://www.noaa.gov) \|
[NOAA Fisheries](https://www.fisheries.noaa.gov/)
-