
Hypsometric Relationship


Warning

This library is under development; none of the presented solutions are yet available for download.

Estimate the heights of unmeasured trees based on the heights measured in the field.

Class Parameters

HypRel(x, y, df, model, iterator)
| Parameter | Description |
| --- | --- |
| `x` | The name of the column containing the tree diameters/circumferences. |
| `y` | The name of the column containing the tree heights. |
| `df` | The DataFrame containing the tree data. |
| `model` | (Optional) A list of models used for estimating tree heights. If `None`, all available models are used. |
| `iterator` | (Optional) A column name string defining which column is used as an iterator. It can be a farm name, plot name, code, or any unique identification tag. |

Class Functions

functions and parameters

```python
HypRel.run()
HypRel.view_metrics()
HypRel.plots(dir=None, show=None)  # (1)
HypRel.get_coef()
HypRel.predict()
```

  1. `dir`: the directory where the plots will be saved. If `dir` is `None`, the plots are displayed instead.
     `show`: whether to display the plots on screen; can be `True` or `False`.
| Function | Description |
| --- | --- |
| `.run()` | Fits the models. |
| `.view_metrics()` | Returns a table of metrics for each evaluated model. |
| `.plots(dir=None, show=True)` | Returns the height and residual plots. |
| `.get_coef()` | Returns the coefficients of each model. |
| `.predict()` | Returns the predicted heights and the models used in new columns. |

Example Usage

hyp_rel_example.py

```python
from fptools.hyp_rel import HypRel  # (1)
import pandas as pd  # (2)
```

  1. Import the HypRel class.
  2. Import pandas for data manipulation.

Create a variable for the HypRel Class

hyp_rel_example.py

```python
df = pd.read_csv(r'C:/Your/path/csv_inventory_file.csv')  # (1)
reg = HypRel('CAP', 'HT', df)  # (2)
results = reg.run()  # (3)
metrics = reg.view_metrics()  # (4)
reg.plots(r'C:/Your/path/to_save', show=True)  # (5)
df_coefficients = reg.get_coef()  # (6)
final_results = reg.predict()  # (7)
```

  1. Load your CSV file.
  2. Create the variable reg containing a HypRel instance.
  3. Run the models and save the results in the results variable.
  4. Evaluate the fitted models and save the metrics in the metrics variable.
  5. Generate the plots for the fitted models.
  6. Retrieve the coefficients of each fitted model.
  7. Obtain the final heights and the models used for estimation.
```mermaid
flowchart LR
    subgraph run
        runText1[Run all the available models]
    end
    subgraph metrics
        runText2[Evaluate each fitted model]
    end
    subgraph plots
        runText3[Generate plots]
    end
    subgraph coefficients
        runText4[Return coefficients]
    end
    subgraph predict
        runText5[Return the estimated heights and used functions]
    end
    %% Links to the subgraphs:
    HypRel-Module --> run
    HypRel-Module --> metrics
    HypRel-Module --> plots
    HypRel-Module --> coefficients
    HypRel-Module --> predict
```
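Conceptually, fitting one of the listed models, e.g. Curtis, reduces to an ordinary least-squares fit after log-linearization. The following is a self-contained sketch on synthetic data, not the library's implementation; the coefficient values and data are illustrative assumptions.

```python
import numpy as np

# Synthetic diameter (cm) and height (m) data -- illustrative only.
rng = np.random.default_rng(42)
dbh = rng.uniform(8.0, 40.0, size=200)
height = np.exp(3.2 - 9.0 / dbh) * rng.lognormal(0.0, 0.05, size=200)

# Curtis model: H = exp(b0 + b1 * (1/x)).
# Log-linearize: ln(H) = b0 + b1 * (1/x), then solve by least squares.
X = np.column_stack([np.ones_like(dbh), 1.0 / dbh])
b0, b1 = np.linalg.lstsq(X, np.log(height), rcond=None)[0]

# Back-transform to predicted heights and compute the fit error.
predicted = np.exp(b0 + b1 / dbh)
rmse = np.sqrt(np.mean((height - predicted) ** 2))
print(b0, b1, rmse)
```

The same pattern applies to the other linearizable models below: transform, fit by least squares, back-transform.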

Available models

  • curtis
  • \[ \operatorname{Total height} = e^{(\beta_0+\beta_1*\frac{1}{x})} \]

  • parabolic
  • \[ \operatorname{Total height} = \beta_0 + \beta_1 * x + \beta_2 * x^2 \]

  • stoffels
  • \[ \operatorname{Total height} = e^{(\beta_0+\beta_1*\ln(x))} \]

  • henriksen
  • \[ \operatorname{Total height} = \beta_0 + \beta_1 * \ln(x) \]

  • prodan_i
  • \[ \operatorname{Total height} = (\frac{x^2}{\beta_0+\beta_1*x+\beta_2* x^2}) \]

  • prodan_ii
  • \[ \operatorname{Total height} =(\frac{x^2}{\beta_0+\beta_1*x+\beta_2* x^2})+1.3 \]

  • smd_fm
An adaptation of the "Forest Mensuration" Julia package by SILVA (2022), used to perform regressions with different transformations of diameter at breast height and height in hypsometric-relationship fitting.

    Transformations of Y

    • \( y \)
    • \( \log(y) \)
    • \( \log(y - 1.3) \)
    • \( \log(1 + y) \)
    • \( \frac{1}{y} \)
    • \( \frac{1}{y - 1.3} \)
    • \( \frac{1}{\sqrt{y}} \)
    • \( \frac{1}{\sqrt{y - 1.3}} \)
    • \( \frac{x}{\sqrt{y}} \)
    • \( \frac{x}{\sqrt{y - 1.3}} \)
    • \( \frac{x^2}{y} \)
    • \( \frac{x^2}{y - 1.3} \)

    Transformations of X

    • \( x \)
    • \( x^2 \)
    • \( \log(x) \)
    • \( \log(x)^2 \)
    • \( \frac{1}{x} \)
    • \( \frac{1}{x^2} \)
    • \( x + x^2 \)
    • \( x + \log(x) \)
    • \( x + \log(x)^2 \)
    • \( x + \frac{1}{x} \)
    • \( x + \frac{1}{x^2} \)
    • \( x^2 + \log(x) \)
    • \( x^2 + \log(x)^2 \)
    • \( x^2 + \frac{1}{x} \)
    • \( \log(x) + \log(x)^2 \)
    • \( \log(x) + \frac{1}{x} \)
    • \( \log(x) + \frac{1}{x^2} \)
    • \( \log(x)^2 + \frac{1}{x} \)
    • \( \log(x)^2 + \frac{1}{x^2} \)
    • \( \frac{1}{x} + \frac{1}{x^2} \)
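The smd_fm search can be pictured as fitting an ordinary least-squares line for every (Y transformation, X transformation) pair and comparing the fits. The snippet below sketches this idea for a small subset of the listed transformations on synthetic data; comparing R² across different Y transformations is a simplification here, not the library's actual criterion.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(8.0, 40.0, size=150)                             # diameters
y = np.exp(3.0 - 8.0 / x) * rng.lognormal(0.0, 0.05, size=150)   # heights

# A small subset of the documented transformations.
y_transforms = {"y": lambda v: v, "log(y)": np.log, "1/y": lambda v: 1.0 / v}
x_transforms = {"x": lambda v: v, "log(x)": np.log, "1/x": lambda v: 1.0 / v}

results = {}
for yname, yt in y_transforms.items():
    for xname, xt in x_transforms.items():
        X = np.column_stack([np.ones_like(x), xt(x)])
        coef, *_ = np.linalg.lstsq(X, yt(y), rcond=None)
        fitted = X @ coef
        ss_res = np.sum((yt(y) - fitted) ** 2)
        ss_tot = np.sum((yt(y) - yt(y).mean()) ** 2)
        results[(yname, xname)] = 1.0 - ss_res / ss_tot  # R² in transformed space

best = max(results, key=results.get)
print(best, round(results[best], 3))
```

Because the synthetic data follow a Curtis-type curve, the pair (log(y), 1/x) fits almost perfectly in transformed space.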

  • ann (1)
    1. Explanation about ANN below.

    Artificial Neural Network

When the 'ann' model is selected, four different artificial neural network structures are tested, but only the result of one of them is returned. The returned model is selected by the ranking function.
    For the 'ann' model, the module sklearn.neural_network.MLPRegressor is used.

```mermaid
---
title: ANN parameters
---
classDiagram
    class MLPRegressor {
        Epochs: 3000
        Activation: logistic
        Solver mode: lbfgs
        Batch size: dynamic
        Learning rate init: 0.1
        Learning rate mode: adaptive
    }
    class Model_0 {
        Hidden layer sizes: (4,5)
    }
    class Model_1 {
        Hidden layer sizes: (4,2)
    }
    class Model_2 {
        Hidden layer sizes: (3,2)
    }
    class Model_3 {
        Hidden layer sizes: (4,4)
    }
    MLPRegressor <|-- Model_0
    MLPRegressor <|-- Model_1
    MLPRegressor <|-- Model_2
    MLPRegressor <|-- Model_3
```
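The four documented structures can be reproduced directly with scikit-learn's MLPRegressor. The sketch below trains them on synthetic data and keeps the one with the lowest training RMSE; this RMSE-only selection, the synthetic data, and the input-scaling step are illustrative assumptions, not the library's exact procedure.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
dbh = rng.uniform(8.0, 40.0, size=300)
height = np.exp(3.0 - 8.0 / dbh) * rng.lognormal(0.0, 0.05, size=300)

# Standardize the input: the logistic activation saturates on raw diameters.
X = ((dbh - dbh.mean()) / dbh.std()).reshape(-1, 1)

best_rmse, best_model = np.inf, None
for layers in [(4, 5), (4, 2), (3, 2), (4, 4)]:  # structures from the diagram
    net = MLPRegressor(
        hidden_layer_sizes=layers,
        activation="logistic",
        solver="lbfgs",
        max_iter=3000,
        # The learning-rate settings only affect the sgd/adam solvers;
        # they are included here to mirror the documented configuration.
        learning_rate_init=0.1,
        learning_rate="adaptive",
        random_state=0,
    )
    net.fit(X, height)
    rmse = np.sqrt(np.mean((height - net.predict(X)) ** 2))
    if rmse < best_rmse:
        best_rmse, best_model = rmse, layers
print(best_model, round(best_rmse, 2))
```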

    Ranking function

    To select the best-performing models and rank them accordingly, the following metrics are obtained:

| Metric name | Formula |
| --- | --- |
| Mean Absolute Error (MAE) | \( MAE = \frac{1}{n} \sum_{i=1}^{n} \lvert y_i - \hat{y}_i \rvert \) |
| Mean Absolute Percentage Error (MAPE) | \( MAPE = \frac{100}{n} \sum_{i=1}^{n} \left\lvert \frac{y_i - \hat{y}_i}{y_i} \right\rvert \) |
| Mean Squared Error (MSE) | \( MSE = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \) |
| Root Mean Squared Error (RMSE) | \( RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2} \) |
| R Squared (Coefficient of Determination) | \( R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2} \) |
| Explained Variance (EV) | \( EV = 1 - \frac{Var(y - \hat{y})}{Var(y)} \) |
| Mean Error | \( Mean\ Error = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i) \) |

    After obtaining the metrics for each tested model, the best model receives a score of 10, while the others receive scores of 9, 8, and so on.
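The scoring step can be sketched in plain Python: compute a metric per model, sort, and award 10 to the best, 9 to the next, and so on. The predictions below are made up, and ranking on RMSE alone is a simplifying assumption; the library obtains all the metrics listed above.

```python
import math

observed = [12.0, 15.5, 18.0, 21.0, 24.5]

# Hypothetical predictions from three fitted models.
predictions = {
    "curtis":   [12.4, 15.1, 18.3, 20.6, 24.9],
    "stoffels": [11.0, 16.5, 17.0, 22.3, 23.0],
    "ann":      [12.1, 15.6, 17.9, 21.2, 24.4],
}

def rmse(y, y_hat):
    """Root mean squared error between observed and predicted heights."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y))

# Best (lowest-RMSE) model scores 10, the next 9, and so on.
ranked = sorted(predictions, key=lambda m: rmse(observed, predictions[m]))
scores = {model: 10 - i for i, model in enumerate(ranked)}
print(scores)
```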