Release notes

1.6.0

Main changes

  • ElcoreNN DSP library optimizations.

  • Support for the Tanh and InstanceNormalization ONNX operators.

ElcoreNN DSP

  • Increase performance of the following layers: Concat, Reshape, Split.

1.5.0

Main changes

  • New ElcoreNN CPU API functions.

  • ElcoreNN DSP library optimizations.

  • Update onnx-elcorenn to v1.3.3.

  • Support for new ONNX operators and Keras layers.

  • elcorenn-examples: Installation path has been changed.

ElcoreNN CPU API

  • InvokeModel:
    • Added support for different types of model input data.

    • Added support for multiple outputs when the input is a dmabuf (for models with one input).

      dmabuf_input_user_pointer_output.cpp
      #include "elcorenn/elcorenn.h"

      #include <cstdint>
      #include <vector>

      int main() {
        // Init ElcoreNN resources
        InitBackend();
        // Load model from files
        auto id = LoadModel("path-to-model-json-file", "path-to-weights-bin-file");

        // Read the output layers' shapes: shape_arr[0] holds the number of
        // dimensions, followed by the dimension values themselves
        int outputs_number = GetOutputsNumber(id);
        std::vector<std::vector<int>> outputs_shapes;
        for (int i = 0; i < outputs_number; i++) {
          uint32_t shape_arr[MAX_NDIM_VALUE + 1];
          GetOutputShape(id, i, shape_arr);
          std::vector<int> shape(shape_arr[0]);
          for (uint32_t j = 0; j < shape_arr[0]; j++) {
            shape[j] = shape_arr[j + 1];
          }
          outputs_shapes.push_back(shape);
        }

        std::vector<int> buffers;
        uint32_t batch_size = 0;
        // Create and fill dmabufs (or use already existing ones) and push
        // their file descriptors into the buffers vector. Set batch_size to
        // the number of dmabufs in buffers: it is the batch dimension of the
        // model inputs and outputs.
        // ...

        // Allocate memory for outputs data
        std::vector<std::vector<float>> outputs_data;
        for (int i = 0; i < outputs_number; i++) {
          auto shape = outputs_shapes[i];
          int size = 1;
          for (auto dim_size : shape) {
            size = size * dim_size;
          }
          outputs_data.push_back(std::vector<float>(size * batch_size));
        }

        // Prediction: collect one raw pointer per model output
        // (a std::vector avoids the non-standard variable-length array)
        std::vector<float*> outputs(outputs_number);
        for (int j = 0; j < outputs_number; j++) {
          outputs[j] = outputs_data[j].data();
        }
        InvokeModel(id, buffers.data(), outputs.data(), batch_size);
        return 0;
      }
      
    • Added support for a dmabuf input and a dmabuf output (for models with one input and one output).

      dmabuf_input_dmabuf_output.cpp
      #include "elcorenn/elcorenn.h"

      #include <cstdint>
      #include <vector>

      int main() {
        // Init ElcoreNN resources
        InitBackend();
        // Load model from files
        auto id = LoadModel("path-to-model-json-file", "path-to-weights-bin-file");

        std::vector<int> input_buffers;
        std::vector<int> output_buffers;
        uint32_t batch_size = 0;
        // Create dmabufs (or use already existing ones) and push their file
        // descriptors into input_buffers and output_buffers. Set batch_size
        // to the number of dmabufs in each vector: it is the batch dimension
        // of the model input and output.
        // ...

        // Prediction
        InvokeModel(id, input_buffers.data(), output_buffers.data(), batch_size);
        return 0;
      }
      
  • InitBackend: Add argument dsp_heap to set the DSP heap size.

  • LoadModel: Add argument optimization to set the data type of computation (see the sketch below).
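
    A minimal sketch of how the two new arguments might be used together. The
    release notes confirm that the arguments exist, but their exact signatures
    are not shown here, so the value types below (a heap size in bytes, an
    enum-like optimization constant) are assumptions:

    new_arguments_sketch.cpp (hypothetical)
    #include "elcorenn/elcorenn.h"

    int main() {
      // dsp_heap: assumed to be the DSP heap size in bytes
      InitBackend(/*dsp_heap=*/64 * 1024 * 1024);
      // optimization: assumed constant selecting the computation data type;
      // the name ELCORENN_FP16 is hypothetical, not a documented identifier
      auto id = LoadModel("path-to-model-json-file", "path-to-weights-bin-file",
                          /*optimization=*/ELCORENN_FP16);
      return 0;
    }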

ElcoreNN DSP

  • Add fp16 support for Pad layer.

  • Increase performance of the Sigmoid layer by using precalculated table values.

  • Increase performance by fusing layers:

    X -> Sigmoid -> Mul -> Y
    (X also feeds Mul directly, so the graph computes X * Sigmoid(X))

    into:

    X -> X * Sigmoid(X) -> Y

    The fused X * Sigmoid(X) function uses precalculated table values, as
    sketched below.
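
    As an illustration of the table-based approach behind the Sigmoid and
    fused X * Sigmoid(X) speedups, here is a minimal fp32 sketch. The table
    size, input range, and clamping policy are assumptions made for the
    example, not ElcoreNN's actual parameters:

    silu_table_sketch.cpp (hypothetical)
    #include <array>
    #include <cmath>

    // Table resolution and input range are illustrative assumptions.
    constexpr int kTableSize = 1024;
    constexpr float kMinX = -8.0f;
    constexpr float kMaxX = 8.0f;

    // Precalculate SiLU(x) = x * sigmoid(x) on a uniform grid once.
    static std::array<float, kTableSize> BuildSiluTable() {
      std::array<float, kTableSize> table{};
      for (int i = 0; i < kTableSize; ++i) {
        float x = kMinX + (kMaxX - kMinX) * i / (kTableSize - 1);
        table[i] = x / (1.0f + std::exp(-x));
      }
      return table;
    }

    float SiluLut(float x) {
      static const std::array<float, kTableSize> table = BuildSiluTable();
      // Outside the table range SiLU is close to 0 or to x, so clamp cheaply.
      if (x <= kMinX) return 0.0f;
      if (x >= kMaxX) return x;
      float pos = (x - kMinX) / (kMaxX - kMinX) * (kTableSize - 1);
      int i = static_cast<int>(pos);
      float frac = pos - static_cast<float>(i);
      // One table read plus a linear interpolation replaces the exp call,
      // which is where the speedup over direct evaluation comes from.
      return table[i] + frac * (table[i + 1] - table[i]);
    }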

onnx-elcorenn

  • Update to v1.3.3.

  • Save the converter version to the model JSON file.

  • Support Python 3.8.

  • Support Dropout and Cast layers.

  • Bug fixes.

ONNX Operators

  • AveragePool: 2D, dilations=1.

  • Clip.

  • Conv: support for dimension=3 (fp32 only).

  • ConvTranspose: support for dimension=1.

  • LSTM: reference version (fp32 only).

  • MatMul.

Keras layers

  • Conv1DTranspose: reference version (fp32).

  • Conv3D: reference version (fp32).

  • LSTM: reference version (fp32).

elcorenn-examples

  • Installation path has been changed to /usr/bin.

  • prepare-examples-env.sh has been renamed to elcorenn-examples-get-data.sh.

Examples look for models relative to the $HOME directory:

  1. Go to the $HOME directory.

  2. Run the elcorenn-examples-get-data.sh script to download the models.

1.4.0

Main changes

  • Add support for new layers.

  • Add a YOLOv5 example.

  • Update the onnx-elcorenn converter to v1.3.0.

  • Update ElcoreNN CPU API.

  • Create web storage for converter and elcorenn-examples-data releases.

ElcoreNN CPU

  • Support for models with multiple inputs and outputs.

  • Support dma_buf as model input.

  • Add functions to get the names and shapes of model inputs and outputs.

ElcoreNN DSP

  • Add Div, Erf, Conv1D, Sqrt layers.

elcorenn-examples

  • Add a new example of YOLOv5s usage.

  • Update the archive with video and models.

onnx-elcorenn

  • Update onnx-elcorenn to v1.3.0.

  • Support models converted from TensorFlow to ONNX.

  • Support the sizes attribute of the Resize operator, which specifies the target size of the output tensor.

Compatibility: elcorenn == v1.4.0