Getting started

This document describes the components of the ElcoreNN SDK.

ElcoreNN SDK is a set of tools for running neural network models on Elcore50 cores.

It consists of:

  • ElcoreNN (pre-compiled ElcoreNN DSP and ElcoreNN CPU libraries).

  • ElcoreNN CPU API headers.

  • Converters for translating models from ONNX and Keras formats to the internal model format.

  • Examples of ElcoreNN CPU usage.

  • Converted models, videos and images used by the example applications.

ElcoreNN libraries and examples are shipped in the elcorenn and elcorenn-examples packages, respectively.

elcorenn and elcorenn-examples are available as MCom-03 Buildroot packages and MCom-03 ALT Linux RPM packages.

Overview

ElcoreNN is a neural network inference engine accelerated for the Elcore50 DSP. It consists of the ElcoreNN DSP library and the ElcoreNN CPU library.

The ElcoreNN DSP library is a highly optimized fp16 neural network inference library for Elcore50.

The ElcoreNN CPU library allows running NN applications on Linux. It is linked to the ElcoreNN DSP library via ElcoreCL (an OpenCL-like library for Elcore50):

digraph OverviewDigraph {
  node [ shape = box, style = filled, width=2.5, height=0.4]
  A [label = "C++ NN Application"];
  B [label = "ElcoreNN CPU library", fillcolor = azure2];
  C [label = "ElcoreCL"];
  D [label = "Elcore50 driver"];
  E [label = "ElcoreNN DSP library", fillcolor = azure2];
  F [label = "Elcore50"];
  A -> B -> C -> D -> E -> F;
}

The ElcoreNN CPU library provides a simple C++ API for loading and running neural network models. ElcoreNN uses its own model format; you can convert your model from ONNX or Keras format to the ElcoreNN model format using the converters (see Converters).
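
For illustration, here is a minimal sketch of the loading step (the file names are placeholders; the complete, buildable example is shown in the How to run a model section below):

#include "elcorenn/elcorenn.h"

int main() {
  // A converted model consists of two files: a JSON model description
  // and a binary weights file produced by the converters.
  auto backend_id = InitBackend();
  auto model_id = LoadModel("model.json", "model.bin",
                            ENNDataType::FLOAT16, backend_id);
  ReleaseModel(model_id);
  ReleaseBackend(backend_id);
}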

Quick start

elcorenn and elcorenn-examples are installed into MCom-03 Buildroot by default.

To install the RPM packages on MCom-03 ALT Linux, run:

apt-get install elcorenn elcorenn-examples

elcorenn-examples contains several examples of using the ElcoreNN CPU API.

To run the examples, follow these instructions:

  1. From a Linux terminal on the device, go to the $HOME folder.

  2. Run the elcorenn-examples-get-data.sh script to download the models and example data.

  3. The examples run several neural network models: YOLOv8n, YOLOv5s, MobileNet, ResNet50 and a fully connected model. The MobileNet, ResNet50 and fully connected models have been converted from Keras to ElcoreNN format; the YOLOv8n and YOLOv5s models have been converted from ONNX. See mcom03-defconfig-src/buildroot/dl/elcorenn-examples/ for details.

The following examples are available:

  • sample-full-connective is a simple example of a neural network that accepts an input vector of shape [1,32], multiplies it by 1 and returns an output vector of shape [1,32].

    To run the example, use the command:

    sample-full-connective
    

    Expected result:

    Input: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32
    Output: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32
    
  • sample-resnet is an example of object classification using the ResNet50 model.

    Input data: Image file (JPG, PNG).

    Output data: The result of the object classification printed to stdout: a string containing the label of the Top-1 class prediction from the ImageNet dataset and the probability of that prediction.

    To run the example, use the command:

    sample-resnet $HOME/elcorenn-examples-data/media/bubble.jpg
    

    You can use your own image as input. Copy the image to the device and specify the path to it as above.

  • demo-classification is a demo application for object classification from a video file using the MobileNet model (not available for ALT Linux).

    The application uses GStreamer and the VPU to decode the video stream and displays the prediction result on a monitor connected to the device via HDMI. It also shows the prediction time for a single Elcore50 core.

    Input data: Video file (.h264 format).

    Output data: The result of the MobileNet prediction displayed on the monitor.

    To run the example, use the command:

    demo-classification -i $HOME/elcorenn-examples-data/media/animals.h264 \
                        -o hdmi@mcom03-cfg
    
  • sample-yolo is an example of object detection using a YOLOv5 or YOLOv8 model pretrained on the COCO dataset.

    Input data: Image file (JPG, PNG).

    Output data: Image file with bounding boxes of detected objects.

    To run the example, use the command:

    sample-yolo yolov5s -i $HOME/elcorenn-examples-data/media/bicycle.jpg
    

    for the YOLOv5 model, or the command:

    sample-yolo yolov8n -i $HOME/elcorenn-examples-data/media/bicycle.jpg
    

    for the YOLOv8 model. You can use your own image as input. Copy the image to the device and specify the path to it as above.

How to run a model

First, you need to prepare your model and compile your program. These steps are performed on the host machine.

Preparation (host)

  1. Train the model using Keras or PyTorch, or use a pretrained model.

  2. Save the model in Keras SavedModel format or ONNX format.

  3. Convert the model to the ElcoreNN model format using the converters (see Converters).
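
The converters produce the pair of files that the ElcoreNN API expects when loading a model: a JSON model description and a binary weights file (see the LoadModel call in the example below).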

Compilation (host)

  1. Write your C++ application using the ElcoreNN API.

    main.cpp
    #include "elcorenn/elcorenn.h"
    
    int main() {
      // Init ElcoreNN backend
      auto backend_id = InitBackend();
      // Load model from files
      auto model_id = LoadModel("path-to-model-json-file", "path-to-weights-bin-file",
                                ENNDataType::FLOAT16, backend_id);
      float* input_data;
      float* output_data;
      uint32_t batch_size;
      // Allocate input_data and output_data, fill input_data and set batch_size
      // (batch_size is the batch dimension of input_data and output_data).
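      // For example (an assumption for illustration: a model with input
      // [1, 32] and output [1, 32], like sample-full-connective; adjust
      // the shapes to your own model):
      float in_buf[32];
      float out_buf[32];
      for (int i = 0; i < 32; ++i) in_buf[i] = static_cast<float>(i + 1);
      input_data = in_buf;
      output_data = out_buf;
      batch_size = 1;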
    
      // Prediction
      float* inputs[1] = {input_data};
      float* outputs[1] = {output_data};
      InvokeModel(model_id, inputs, outputs, batch_size);
    
      // Release memory
      ReleaseModel(model_id);
      ReleaseBackend(backend_id);
    }
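
    Note that InvokeModel takes arrays of buffer pointers (presumably one pointer per model input and output), which is why the single input and output buffers above are wrapped in one-element arrays.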
    
  2. Link the ElcoreNN library.

    Create a CMakeLists.txt file:

    cmake_minimum_required(VERSION 3.0)
    project(my-app)
    find_package(elcorenn REQUIRED)
    add_executable(my-app main.cpp)
    target_link_libraries(my-app elcorenn)
    
  3. Cross-compile the program using the aarch64-buildroot-linux-gnu_sdk-buildroot toolchain provided with the MCom-03 Linux SDK.

    Unpack the aarch64-buildroot-linux-gnu_sdk-buildroot.tar.gz archive and go into the unpacked folder.

    Run the following commands to relocate the SDK path and set up the environment:

    ./relocate-sdk.sh
    source environment-setup
    

    Go back to the folder with your code and run:

    mkdir build && cd build
    cmake .. -DCMAKE_BUILD_TYPE=Release -G "Unix Makefiles"
    make
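
    After the build completes, the cross-compiled my-app binary is located in the build folder.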
    

Running the model (mcom03)

  1. Open a terminal on the host machine connected to the device:

    # Connect the host machine to the board via UART,
    # then run on the host machine:
    minicom -D /dev/ttyUSBX
    # where X is the number of the USB device
    
  2. Boot Linux on the device.

  3. Copy the executable file to the device:

    scp ./my-app root@[device-ip-addr]:/root
    # use ifconfig in the minicom terminal to find the device IP address
    
  4. Execute the application:

    # from the device terminal run
    cd /root
    ./my-app