# Using MindSpore Lite for Model Inference (C/C++)

## When to Use

MindSpore Lite is an AI engine that provides AI model inference for different hardware devices. It has been used in a wide range of fields, such as image classification, target recognition, facial recognition, and character recognition.

This document describes the general development process for MindSpore Lite model inference.

## Basic Concepts

Before getting started, you need to understand the following basic concepts:

**Tensor**: a special data structure that is similar to arrays and matrices. It is the basic data structure used in MindSpore Lite network operations.

**Float16 inference mode**: an inference mode in half-precision format, where a number is represented with 16 bits.
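
Float16 inference is enabled per device through the runtime device information object. The following minimal sketch assumes a CPU device information object named **cpu_device_info** has already been created (as shown in the development steps below) and simply turns Float16 inference on before the device information is added to the context:

```c
// Assumption: cpu_device_info was created with OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU).
// Enable half-precision (Float16) inference for this device.
// The switch takes effect only on CPU and GPU devices.
OH_AI_DeviceInfoSetEnableFP16(cpu_device_info, true);
```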

## Available APIs

APIs involved in MindSpore Lite model inference are categorized into context APIs, model APIs, and tensor APIs.

### Context APIs

| API       | Description       |
| ------------------ | ----------------- |
|OH_AI_ContextHandle OH_AI_ContextCreate()|Creates a context object.|
|void OH_AI_ContextSetThreadNum(OH_AI_ContextHandle context, int32_t thread_num)|Sets the number of runtime threads.|
|void OH_AI_ContextSetThreadAffinityMode(OH_AI_ContextHandle context, int mode)|Sets the affinity mode for binding runtime threads to CPU cores, which are classified into large, medium, and small cores based on the CPU frequency. You only need to bind the large or medium cores, but not the small cores.|
|OH_AI_DeviceInfoHandle OH_AI_DeviceInfoCreate(OH_AI_DeviceType device_type)|Creates a runtime device information object.|
|void OH_AI_ContextDestroy(OH_AI_ContextHandle *context)|Destroys a context object.|
|void OH_AI_DeviceInfoSetEnableFP16(OH_AI_DeviceInfoHandle device_info, bool is_fp16)|Sets whether to enable Float16 inference. This function is available only for CPU and GPU devices.|
|void OH_AI_ContextAddDeviceInfo(OH_AI_ContextHandle context, OH_AI_DeviceInfoHandle device_info)|Adds a runtime device information object.|

### Model APIs

| API       | Description       |
| ------------------ | ----------------- |
|OH_AI_ModelHandle OH_AI_ModelCreate()|Creates a model object.|
|OH_AI_Status OH_AI_ModelBuildFromFile(OH_AI_ModelHandle model, const char *model_path, OH_AI_ModelType model_type, const OH_AI_ContextHandle model_context)|Loads and builds a MindSpore model from a model file.|
|void OH_AI_ModelDestroy(OH_AI_ModelHandle *model)|Destroys a model object.|

### Tensor APIs

| API       | Description       |
| ------------------ | ----------------- |
|OH_AI_TensorHandleArray OH_AI_ModelGetInputs(const OH_AI_ModelHandle model)|Obtains the input tensor array structure of a model.|
|int64_t OH_AI_TensorGetElementNum(const OH_AI_TensorHandle tensor)|Obtains the number of tensor elements.|
|const char *OH_AI_TensorGetName(const OH_AI_TensorHandle tensor)|Obtains the name of a tensor.|
|OH_AI_DataType OH_AI_TensorGetDataType(const OH_AI_TensorHandle tensor)|Obtains the tensor data type.|
|void *OH_AI_TensorGetMutableData(const OH_AI_TensorHandle tensor)|Obtains the pointer to mutable tensor data.|
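
Combined, these tensor APIs let you inspect a model's inputs before populating them. The following brief sketch assumes **model** has already been loaded and built (see the development steps below); it prints the name, data type, and element count of each input tensor:

```c
// Assumption: model has already been created and built with OH_AI_ModelBuildFromFile.
OH_AI_TensorHandleArray in_tensors = OH_AI_ModelGetInputs(model);
for (size_t i = 0; i < in_tensors.handle_num; ++i) {
  OH_AI_TensorHandle tensor = in_tensors.handle_list[i];
  // Print the basic attributes of each input tensor.
  printf("input %zu: name=%s, data type=%d, elements=%lld\n", i,
         OH_AI_TensorGetName(tensor), (int)OH_AI_TensorGetDataType(tensor),
         (long long)OH_AI_TensorGetElementNum(tensor));
}
```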

## How to Develop

The following figure shows the development process for MindSpore Lite model inference.

**Figure 1** Development process for MindSpore Lite model inference

![how-to-use-mindspore-lite](figures/01.png)

Before moving to the development process, you need to include the related header files and write a function that generates random input data. The sample code is as follows:

```c
#include <stdlib.h>
#include <stdio.h>
#include "mindspore/model.h"

// Generate random input.
int GenerateInputDataWithRandom(OH_AI_TensorHandleArray inputs) {
  for (size_t i = 0; i < inputs.handle_num; ++i) {
    float *input_data = (float *)OH_AI_TensorGetMutableData(inputs.handle_list[i]);
    if (input_data == NULL) {
      printf("MSTensorGetMutableData failed.\n");
      return OH_AI_STATUS_LITE_ERROR;
    }
    int64_t num = OH_AI_TensorGetElementNum(inputs.handle_list[i]);
    const int divisor = 10;
    for (int64_t j = 0; j < num; j++) {
      input_data[j] = (float)(rand() % divisor) / divisor;  // 0--0.9f
    }
  }
  return OH_AI_STATUS_SUCCESS;
}
```

The development process consists of the following main steps:

1. Prepare the required model.

    The required model can be downloaded directly or obtained using the model conversion tool.

     - If the downloaded model is in the `.ms` format, you can use it directly for inference. The following uses the **mobilenetv2.ms** model as an example.
     - If the downloaded model uses a third-party framework, such as TensorFlow, TensorFlow Lite, Caffe, or ONNX, you can use the [model conversion tool](https://www.mindspore.cn/lite/docs/en/master/use/downloads.html#1-8-1) to convert it to the `.ms` format.

2. Create a context, and set parameters such as the number of runtime threads and device type.

    The following describes two typical scenarios:

    Scenario 1: Only the CPU inference context is created.

    ```c
    // Create a context, and set the number of runtime threads to 2 and the thread affinity mode to 1 (big cores first).
    OH_AI_ContextHandle context = OH_AI_ContextCreate();
    if (context == NULL) {
      printf("OH_AI_ContextCreate failed.\n");
      return OH_AI_STATUS_LITE_ERROR;
    }
    const int thread_num = 2;
    OH_AI_ContextSetThreadNum(context, thread_num);
    OH_AI_ContextSetThreadAffinityMode(context, 1);
    // Set the device type to CPU, and disable Float16 inference.
    OH_AI_DeviceInfoHandle cpu_device_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
    if (cpu_device_info == NULL) {
      printf("OH_AI_DeviceInfoCreate failed.\n");
      OH_AI_ContextDestroy(&context);
      return OH_AI_STATUS_LITE_ERROR;
    }
    OH_AI_DeviceInfoSetEnableFP16(cpu_device_info, false);
    OH_AI_ContextAddDeviceInfo(context, cpu_device_info);
    ```

    Scenario 2: The neural network runtime (NNRT) and CPU heterogeneous inference contexts are created.

    NNRT is the runtime for cross-chip inference computing in the AI field. Generally, the acceleration hardware connected to NNRT, such as the NPU, has strong inference capabilities but supports only a limited number of operators, whereas the general-purpose CPU has weak inference capabilities but supports a wide range of operators. MindSpore Lite supports NNRT and CPU heterogeneous inference. Model operators are preferentially scheduled to NNRT for inference. If certain operators are not supported by NNRT, then they are scheduled to the CPU for inference. The following is the sample code for configuring NNRT/CPU heterogeneous inference:
   <!--Del-->
   > **NOTE**
   >
   > NNRT/CPU heterogeneous inference requires access to NNRT hardware. For details, see [OpenHarmony/ai_neural_network_runtime](https://gitee.com/openharmony/ai_neural_network_runtime).
   <!--DelEnd-->
    ```c
    // Create a context, and set the number of runtime threads to 2 and the thread affinity mode to 1 (big cores first).
    OH_AI_ContextHandle context = OH_AI_ContextCreate();
    if (context == NULL) {
      printf("OH_AI_ContextCreate failed.\n");
      return OH_AI_STATUS_LITE_ERROR;
    }
    // Preferentially use NNRT inference.
    // Use the first NNRT device of the ACCELERATOR class to create the NNRT device information and configure the high-performance inference mode for the NNRT hardware. You can also use OH_AI_GetAllNNRTDeviceDescs() to obtain the list of NNRT devices in the current environment, search for a specific device by device name or type, and use the device as the NNRT inference hardware.
    OH_AI_DeviceInfoHandle nnrt_device_info = OH_AI_CreateNNRTDeviceInfoByType(OH_AI_NNRTDEVICE_ACCELERATOR);
    if (nnrt_device_info == NULL) {
      printf("OH_AI_DeviceInfoCreate failed.\n");
      OH_AI_ContextDestroy(&context);
      return OH_AI_STATUS_LITE_ERROR;
    }
    OH_AI_DeviceInfoSetPerformanceMode(nnrt_device_info, OH_AI_PERFORMANCE_HIGH);
    OH_AI_ContextAddDeviceInfo(context, nnrt_device_info);

    // Configure CPU inference.
    OH_AI_DeviceInfoHandle cpu_device_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
    if (cpu_device_info == NULL) {
      printf("OH_AI_DeviceInfoCreate failed.\n");
      OH_AI_ContextDestroy(&context);
      return OH_AI_STATUS_LITE_ERROR;
    }
    OH_AI_ContextAddDeviceInfo(context, cpu_device_info);
    ```
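
    The code above selects an NNRT device by type. The following is a minimal, assumption-based sketch of the alternative mentioned in the comment: enumerating NNRT devices with **OH_AI_GetAllNNRTDeviceDescs()** and selecting one by name. The device-description helpers used here (**OH_AI_GetNameFromNNRTDeviceDesc**, **OH_AI_CreateNNRTDeviceInfoByName**, and **OH_AI_DestroyAllNNRTDeviceDescs**) should be verified against the MindSpore Lite headers shipped with your SDK version.

    ```c
    // Assumption-based sketch: enumerate NNRT devices and select one by name
    // instead of creating the device information by type.
    size_t desc_num = 0;
    NNRTDeviceDesc *descs = OH_AI_GetAllNNRTDeviceDescs(&desc_num);
    if (descs != NULL && desc_num > 0) {
      // Use the first device in the list; a real application would match a known device name.
      const char *device_name = OH_AI_GetNameFromNNRTDeviceDesc(descs);
      printf("NNRT device found: %s\n", device_name);
      OH_AI_DeviceInfoHandle named_nnrt_device_info = OH_AI_CreateNNRTDeviceInfoByName(device_name);
      if (named_nnrt_device_info != NULL) {
        OH_AI_ContextAddDeviceInfo(context, named_nnrt_device_info);
      }
      OH_AI_DestroyAllNNRTDeviceDescs(&descs);
    }
    ```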

3. Create, load, and build the model.

    Call **OH_AI_ModelBuildFromFile** to load and build the model.

    In this example, the **argv[1]** parameter passed to **OH_AI_ModelBuildFromFile** indicates the specified model file path.

    ```c
    // Create a model.
    OH_AI_ModelHandle model = OH_AI_ModelCreate();
    if (model == NULL) {
      printf("OH_AI_ModelCreate failed.\n");
      OH_AI_ContextDestroy(&context);
      return OH_AI_STATUS_LITE_ERROR;
    }

    // Load and build the inference model. The model type is OH_AI_MODELTYPE_MINDIR.
    int ret = OH_AI_ModelBuildFromFile(model, argv[1], OH_AI_MODELTYPE_MINDIR, context);
    if (ret != OH_AI_STATUS_SUCCESS) {
      printf("OH_AI_ModelBuildFromFile failed, ret: %d.\n", ret);
      OH_AI_ModelDestroy(&model);
      return ret;
    }
    ```
4. Input data.

    Before executing model inference, you need to populate the input tensors with data. In this example, the model inputs are populated with random data.

    ```c
    // Obtain the input tensors.
    OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);
    if (inputs.handle_list == NULL) {
      printf("OH_AI_ModelGetInputs failed, ret: %d.\n", ret);
      OH_AI_ModelDestroy(&model);
      return ret;
    }
    // Populate the input tensors with random data.
    ret = GenerateInputDataWithRandom(inputs);
    if (ret != OH_AI_STATUS_SUCCESS) {
      printf("GenerateInputDataWithRandom failed, ret: %d.\n", ret);
      OH_AI_ModelDestroy(&model);
      return ret;
    }
    ```

5. Execute model inference.

    Call **OH_AI_ModelPredict** to perform model inference.

    ```c
    // Execute model inference.
    OH_AI_TensorHandleArray outputs;
    ret = OH_AI_ModelPredict(model, inputs, &outputs, NULL, NULL);
    if (ret != OH_AI_STATUS_SUCCESS) {
      printf("OH_AI_ModelPredict failed, ret: %d.\n", ret);
      OH_AI_ModelDestroy(&model);
      return ret;
    }
    ```

6. Obtain the output.

    After model inference is complete, you can obtain the inference result through the output tensor.

    ```c
    // Obtain the output tensor and print the information.
    for (size_t i = 0; i < outputs.handle_num; ++i) {
      OH_AI_TensorHandle tensor = outputs.handle_list[i];
      int64_t element_num = OH_AI_TensorGetElementNum(tensor);
      printf("Tensor name: %s, tensor size is %zu ,elements num: %lld.\n", OH_AI_TensorGetName(tensor),
            OH_AI_TensorGetDataSize(tensor), element_num);
      const float *data = (const float *)OH_AI_TensorGetData(tensor);
      printf("output data is:\n");
      const int max_print_num = 50;
      for (int j = 0; j < element_num && j <= max_print_num; ++j) {
        printf("%f ", data[j]);
      }
      printf("\n");
    }
    ```

7. Destroy the model.

    If the MindSpore Lite inference framework is no longer needed, you need to destroy the created model.

    ```c
    // Destroy the model.
    OH_AI_ModelDestroy(&model);
    ```

## Verification

1. Write **CMakeLists.txt**.

    ```cmake
    cmake_minimum_required(VERSION 3.14)
    project(Demo)

    add_executable(demo main.c)

    target_link_libraries(
            demo
            mindspore_lite_ndk
            pthread
            dl
    )
    ```
   - To use ohos-sdk for cross compilation, you need to set the native toolchain path for the CMake tool as follows: `-DCMAKE_TOOLCHAIN_FILE="/xxx/native/build/cmake/ohos.toolchain.cmake"`.

   - The toolchain builds a 64-bit application by default. To build a 32-bit application, add the following configuration: `-DOHOS_ARCH="armeabi-v7a"`.

2. Run the CMake tool to build **demo**, and then run it on the device.

    - Use hdc_std to connect to the device and push **demo** and **mobilenetv2.ms** to the same directory on the device.
    - Run the hdc_std shell command to access the device, go to the directory where **demo** is located, and run the following command:

    ```shell
    ./demo mobilenetv2.ms
    ```

    The inference is successful if the output is similar to the following:

    ```shell
    # ./demo mobilenetv2.ms
    Tensor name: Softmax-65, tensor size is 4004 ,elements num: 1001.
    output data is:
    0.000018 0.000012 0.000026 0.000194 0.000156 0.001501 0.000240 0.000825 0.000016 0.000006 0.000007 0.000004 0.000004 0.000004 0.000015 0.000099 0.000011 0.000013 0.000005 0.000023 0.000004 0.000008 0.000003 0.000003 0.000008 0.000014 0.000012 0.000006 0.000019 0.000006 0.000018 0.000024 0.000010 0.000002 0.000028 0.000372 0.000010 0.000017 0.000008 0.000004 0.000007 0.000010 0.000007 0.000012 0.000005 0.000015 0.000007 0.000040 0.000004 0.000085 0.000023
    ```