# Introduction to MindSpore Lite Kit

## Use Cases

MindSpore Lite is a lightweight AI engine built into OpenHarmony. Its open AI framework comes with a multi-processor architecture to empower intelligent applications in all scenarios. It offers data scientists, algorithm engineers, and developers friendly development, efficient execution, and flexible deployment, helping to build a prosperous open-source ecosystem of AI hardware and software applications.

So far, MindSpore Lite has been widely used in applications such as image classification, target recognition, facial recognition, and character recognition. Typical use cases are as follows:

- Image classification: determines the category to which an image (such as an image of a cat, a dog, an airplane, or a car) belongs. This is the most basic computer vision application and belongs to the supervised learning category.
- Target recognition: uses a preset object detection model to identify objects in the input frames of a camera, add labels to the objects, and mark them with bounding boxes.
- Image segmentation: detects the positions of objects in an image, or determines which object a specific pixel in the image belongs to.
## Advantages

MindSpore Lite provides AI model inference capabilities for hardware devices and end-to-end solutions for developers to empower intelligent applications in all scenarios. Its advantages include:

- High performance: Efficient kernel algorithms and assembly-level optimization support high-performance inference on CPUs and dedicated NNRt chips, maximizing computing power while minimizing inference latency and power consumption.
- Lightweight: Provides an ultra-lightweight solution, and supports model quantization and compression to enable smaller models to run faster and empower AI model deployment in resource-constrained environments.
- All-scenario support: Supports different types of OSs and embedded systems to adapt to AI applications on various intelligent devices.
- Efficient deployment: Supports MindSpore, TensorFlow Lite, Caffe, and ONNX models, provides capabilities such as model compression and data processing, and supports a unified training and inference IR (intermediate representation).
## Development Process

**Figure 1** Development process for MindSpore Lite model inference
![mindspore workflow](figures/mindspore_workflow.png)

The MindSpore Lite development process consists of two phases:

- Model conversion

  MindSpore Lite uses models in `.ms` format for inference. You can use the model conversion tool provided by MindSpore Lite to convert third-party framework models, such as TensorFlow, TensorFlow Lite, Caffe, and ONNX, into `.ms` models. For details, see [Using MindSpore Lite for Model Conversion](./mindspore-lite-converter-guidelines.md).
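
  For illustration, converting a TensorFlow Lite model with the `converter_lite` tool on a Linux host might look like the following sketch (the model file name is a placeholder):

  ```shell
  # Convert a TensorFlow Lite model into the .ms format used by MindSpore Lite.
  # --fmk:        source framework type (for example TFLITE, TF, CAFFE, or ONNX)
  # --modelFile:  path to the source model (placeholder name here)
  # --outputFile: output path without suffix; the tool appends .ms
  ./converter_lite --fmk=TFLITE --modelFile=mobilenet_v2.tflite --outputFile=mobilenet_v2
  ```

  On success, this produces `mobilenet_v2.ms`, which can then be deployed for inference.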

- Model deployment

  You can call the MindSpore Lite runtime APIs to implement model inference or training. The procedure is as follows:

    1. Create the inference or training context, specifying the target hardware and the number of threads.
    2. Load the `.ms` model file.
    3. Set the model input data.
    4. Perform inference or training and read the output.
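
  The four steps above can be sketched with the MindSpore Lite C API. This is a minimal on-device sketch, not a complete application: the model path is a placeholder, the input is simply zero-filled, and cleanup on error paths is abbreviated.

  ```c
  #include <mindspore/context.h>
  #include <mindspore/model.h>
  #include <mindspore/status.h>
  #include <mindspore/tensor.h>
  #include <stdio.h>
  #include <string.h>

  int main(void) {
    // 1. Create the inference context: run on the CPU with 2 threads.
    OH_AI_ContextHandle context = OH_AI_ContextCreate();
    OH_AI_ContextSetThreadNum(context, 2);
    OH_AI_DeviceInfoHandle cpu_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
    OH_AI_ContextAddDeviceInfo(context, cpu_info);

    // 2. Load the .ms model file (placeholder path).
    OH_AI_ModelHandle model = OH_AI_ModelCreate();
    if (OH_AI_ModelBuildFromFile(model, "/data/local/tmp/model.ms",
                                 OH_AI_MODELTYPE_MINDIR, context) != OH_AI_STATUS_SUCCESS) {
      fprintf(stderr, "Failed to build model\n");
      return 1;
    }

    // 3. Set the model input data: zero-fill the first input tensor.
    OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);
    void *in_data = OH_AI_TensorGetMutableData(inputs.handle_list[0]);
    memset(in_data, 0, OH_AI_TensorGetDataSize(inputs.handle_list[0]));

    // 4. Perform inference and read the output.
    OH_AI_TensorHandleArray outputs;
    if (OH_AI_ModelPredict(model, inputs, &outputs, NULL, NULL) != OH_AI_STATUS_SUCCESS) {
      fprintf(stderr, "Inference failed\n");
      return 1;
    }
    const float *out_data = (const float *)OH_AI_TensorGetData(outputs.handle_list[0]);
    printf("first output value: %f\n", out_data[0]);

    OH_AI_ModelDestroy(&model);
    return 0;
  }
  ```

  The sketch assumes a single-input, float-output model; real code should check tensor shapes and data types before casting.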

## Development Mode

MindSpore Lite is built into the OpenHarmony standard system as a system component. You can develop AI applications based on MindSpore Lite in the following ways:

- Method 1: [Using MindSpore Lite for Image Classification (ArkTS)](./mindspore-guidelines-based-js.md). You can directly call the MindSpore Lite ArkTS APIs in the UI code to load the AI model and perform model inference. This method allows you to quickly verify the inference effect.
- Method 2: [Using MindSpore Lite native APIs to develop AI applications](./mindspore-guidelines-based-native.md). You can encapsulate the algorithm models and the code for calling MindSpore Lite native APIs into a dynamic library, and then use N-API to encapsulate the dynamic library into ArkTS APIs for the UI to call.
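
As a minimal sketch of Method 1, loading a model and running inference with the ArkTS API might look like this (the model path passed in is a placeholder, the input is zero-filled, and error handling is omitted):

```typescript
import mindSporeLite from '@ohos.ai.mindSporeLite';

async function runInference(modelPath: string): Promise<void> {
  // Create the inference context: run on the CPU.
  let context: mindSporeLite.Context = { target: ['cpu'] };

  // Load the .ms model file and obtain its input tensors.
  let model: mindSporeLite.Model = await mindSporeLite.loadModelFromFile(modelPath, context);
  let inputs: mindSporeLite.MSTensor[] = model.getInputs();

  // Set the input data: here, a zero-filled buffer of the expected size.
  inputs[0].setData(new ArrayBuffer(inputs[0].dataSize));

  // Perform inference and read the output.
  let outputs: mindSporeLite.MSTensor[] = await model.predict(inputs);
  console.info('output element count: ' + outputs[0].elementNum);
}
```

In a real application, the input buffer would be filled with preprocessed data (for example, a resized and normalized camera frame) instead of zeros.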

## Relationship with Other Kits

Neural Network Runtime (NNRt) functions as a bridge that connects the upper-layer AI inference framework to the underlying acceleration chips, implementing cross-chip inference computing of AI models.

MindSpore Lite natively allows you to configure NNRt for AI-dedicated chips (such as NPUs) to accelerate inference. Therefore, you can configure MindSpore Lite to use the NNRt hardware. This topic focuses on how to develop AI applications using MindSpore Lite. For details about how to use NNRt, see [Connecting the Neural Network Runtime to an AI Inference Framework](../nnrt/neural-network-runtime-guidelines.md).