# Neural Network Runtime

## Introduction

Neural Network Runtime (NNRt) functions as a bridge between the upper-layer AI inference framework and the underlying acceleration chips, enabling cross-chip inference computing of AI models.

As shown in Figure 1, NNRt provides Native APIs for the AI inference framework to access. Currently, NNRt interconnects with [MindSpore Lite](https://gitee.com/openharmony/third_party_mindspore), the inference framework built into the system. In addition, NNRt provides HDI APIs for device-side AI acceleration chips (such as NPUs and DSPs) to access the OpenHarmony hardware ecosystem. Through the AI inference framework and NNRt, AI applications can directly use the underlying chips to accelerate inference computing.

NNRt and MindSpore Lite use MindIR as a unified intermediate representation, which removes unnecessary model conversion steps and makes model transfer more efficient.

Generally, the AI application, AI inference engine, and NNRt run in the same process, while the chip driver runs in another process. Models and computing data are therefore transferred between the two processes through IPC. NNRt implements the HDI client on top of the HDI APIs; accordingly, chip vendors need to implement and expose the HDI services through the HDI APIs.

**Figure 1** NNRt architecture
!["NNRt architecture"](./figures/neural_network_runtime.png)

## Directory Structure

```text
/foundation/ai/neural_network_runtime
├── common                         # Common functions
├── figures                        # Images referenced by README
├── example                        # Development samples
│   ├── deep_learning_framework    # Application/inference framework development samples
│   └── drivers                    # Device driver development samples
├── frameworks
│   └── native                     # Framework code
│       └── ops                    # Operator header files and implementations
├── interfaces                     # APIs
│   ├── innerkits                  # Internal APIs
│   └── kits                       # External APIs
└── test                           # Test cases
    ├── system_test                # System test cases
    └── unittest                   # Unit test cases
```

## Compilation and Building

In the root directory of the OpenHarmony source code, run the following command to build NNRt separately:
```shell
./build.sh --product-name rk3568 --ccache --build-target neural_network_runtime --jobs 4
```
> **Note:**
> - `--product-name`: product name, for example, **Hi3516DV300** or **rk3568**.
> - `--ccache`: uses the compilation cache to speed up the build.
> - `--build-target`: name of the component to build.
> - `--jobs`: number of parallel build jobs, which accelerates compilation.

## Description

### API Description

- [Native API reference](https://gitee.com/openharmony/docs/tree/master/zh-cn/application-dev/reference/apis-neural-network-runtime-kit)
- [HDI API reference](https://gitee.com/openharmony/drivers_interface/tree/master/nnrt)

### How to Use

- For details about how to develop an AI inference engine or application, see the Neural Network Runtime App Development Guide.
- For details about how to develop AI acceleration chip drivers and devices, see the Neural Network Runtime Device Development Guide.

## Repositories Involved

- [ai_neural_network_runtime](https://gitee.com/openharmony/ai_neural_network_runtime)
- [third_party_mindspore](https://gitee.com/openharmony/third_party_mindspore)