# Using MindSpore Lite for Image Classification (C/C++)

## When to Use

You can use [MindSpore Lite](../../reference/apis-mindspore-lite-kit/_mind_spore.md) to quickly deploy AI algorithms into your application and perform model inference for image classification.

Image classification identifies the objects in an image and is widely used in medical image analysis, autonomous driving, e-commerce, and facial recognition.

## Basic Concepts

- N-API: a set of native APIs used to build ArkTS components. N-APIs can be used to encapsulate C/C++ libraries into ArkTS modules.

## Development Process

1. Select an image classification model.
2. Use MindSpore Lite to run on-device inference on the selected image and output the classification result.

## Environment Preparation

Install DevEco Studio 4.1 or later, and update the SDK to API version 11 or later.

## How to Develop

The following uses inference on an image in the album as an example to describe how to use MindSpore Lite to implement image classification.

### Selecting a Model

This sample application uses [mobilenetv2.ms](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/1.5/mobilenetv2.ms) as the image classification model. The model file is available in the **entry/src/main/resources/rawfile** project directory.

If you have another pre-trained image classification model, convert it to the .ms format by referring to [Using MindSpore Lite for Model Conversion](mindspore-lite-converter-guidelines.md).

### Writing Code

#### Image Input and Preprocessing

1. Call [@ohos.file.picker](../../reference/apis-core-file-kit/js-apis-file-picker.md) to select the desired image from the album.

   ```ts
   import { photoAccessHelper } from '@kit.MediaLibraryKit';
   import { BusinessError } from '@kit.BasicServicesKit';

   let uris: Array<string> = [];

   // Create an image picker instance.
   let photoSelectOptions = new photoAccessHelper.PhotoSelectOptions();

   // Set the media file type to IMAGE and set the maximum number of media files that can be selected.
   photoSelectOptions.MIMEType = photoAccessHelper.PhotoViewMIMETypes.IMAGE_TYPE;
   photoSelectOptions.maxSelectNumber = 1;

   // Create an album picker instance and call select() to open the album page for file selection. After file selection is done, the result set is returned through photoSelectResult.
   let photoPicker = new photoAccessHelper.PhotoViewPicker();
   photoPicker.select(photoSelectOptions, async (
     err: BusinessError, photoSelectResult: photoAccessHelper.PhotoSelectResult) => {
     if (err) {
       console.error('MS_LITE_ERR: PhotoViewPicker.select failed with err: ' + JSON.stringify(err));
       return;
     }
     console.info('MS_LITE_LOG: PhotoViewPicker.select successfully, ' +
       'photoSelectResult uri: ' + JSON.stringify(photoSelectResult));
     uris = photoSelectResult.photoUris;
     console.info('MS_LITE_LOG: uri: ' + uris);
   })
   ```

2. Based on the input image size, call [@ohos.multimedia.image](../../reference/apis-image-kit/js-apis-image.md) and [@ohos.file.fs](../../reference/apis-core-file-kit/js-apis-file-fs.md) to crop the image, obtain the image buffer, and standardize the image.

   ```ts
   import { image } from '@kit.ImageKit';
   import { fileIo } from '@kit.CoreFileKit';

   let modelInputHeight: number = 224;
   let modelInputWidth: number = 224;

   // Based on the specified URI, call fileIo.openSync to open the file and obtain the FD.
   let file = fileIo.openSync(uris[0], fileIo.OpenMode.READ_ONLY);
   console.info('MS_LITE_LOG: file fd: ' + file.fd);

   // Based on the FD, call fileIo.readSync to read the data in the file.
   let inputBuffer = new ArrayBuffer(4096000);
   let readLen = fileIo.readSync(file.fd, inputBuffer);
   console.info('MS_LITE_LOG: readSync data to file succeed and inputBuffer size is:' + readLen);

   // Perform image preprocessing through PixelMap.
   let imageSource = image.createImageSource(file.fd);
   imageSource.createPixelMap().then((pixelMap) => {
     pixelMap.getImageInfo().then((info) => {
       console.info('MS_LITE_LOG: info.width = ' + info.size.width);
       console.info('MS_LITE_LOG: info.height = ' + info.size.height);
       // Scale the image to 256 x 256, center-crop it to the model input size, and obtain the image buffer readBuffer.
       pixelMap.scale(256.0 / info.size.width, 256.0 / info.size.height).then(() => {
         pixelMap.crop(
           { x: 16, y: 16, size: { height: modelInputHeight, width: modelInputWidth } }
         ).then(async () => {
           let cropInfo = await pixelMap.getImageInfo();
           console.info('MS_LITE_LOG: crop info.width = ' + cropInfo.size.width);
           console.info('MS_LITE_LOG: crop info.height = ' + cropInfo.size.height);
           // Set the size of readBuffer (4 bytes per RGBA pixel).
           let readBuffer = new ArrayBuffer(modelInputHeight * modelInputWidth * 4);
           await pixelMap.readPixelsToBuffer(readBuffer);
           console.info('MS_LITE_LOG: Succeeded in reading image pixel data, buffer: ' +
             readBuffer.byteLength);
           // Convert readBuffer to the float32 format, and standardize the image.
           const imageArr = new Uint8Array(
             readBuffer.slice(0, modelInputHeight * modelInputWidth * 4));
           console.info('MS_LITE_LOG: imageArr length: ' + imageArr.length);
           let means = [0.485, 0.456, 0.406];
           let stds = [0.229, 0.224, 0.225];
           let float32View = new Float32Array(modelInputHeight * modelInputWidth * 3);
           let index = 0;
           for (let i = 0; i < imageArr.length; i++) {
             if ((i + 1) % 4 == 0) {
               float32View[index] = (imageArr[i - 3] / 255.0 - means[0]) / stds[0]; // B
               float32View[index + 1] = (imageArr[i - 2] / 255.0 - means[1]) / stds[1]; // G
               float32View[index + 2] = (imageArr[i - 1] / 255.0 - means[2]) / stds[2]; // R
               index += 3;
             }
           }
           console.info('MS_LITE_LOG: float32View length: ' + float32View.length);
           let printStr = 'float32View data:';
           for (let i = 0; i < 20; i++) {
             printStr += ' ' + float32View[i];
           }
           console.info('MS_LITE_LOG: float32View data: ' + printStr);
         })
       })
     });
   });
   ```

#### Writing Inference Code

Call [MindSpore Lite](../../reference/apis-mindspore-lite-kit/_mind_spore.md) to implement inference on the device. The operation process is as follows:

1. Include the corresponding header files.

   ```c++
   #include <iostream>
   #include <sstream>
   #include <stdlib.h>
   #include <hilog/log.h>
   #include <rawfile/raw_file_manager.h>
   #include <mindspore/types.h>
   #include <mindspore/model.h>
   #include <mindspore/context.h>
   #include <mindspore/status.h>
   #include <mindspore/tensor.h>
   #include "napi/native_api.h"
   ```

2. Read the model file.

   ```c++
   #define LOGI(...) ((void)OH_LOG_Print(LOG_APP, LOG_INFO, LOG_DOMAIN, "[MSLiteNapi]", __VA_ARGS__))
   #define LOGD(...) ((void)OH_LOG_Print(LOG_APP, LOG_DEBUG, LOG_DOMAIN, "[MSLiteNapi]", __VA_ARGS__))
   #define LOGW(...) ((void)OH_LOG_Print(LOG_APP, LOG_WARN, LOG_DOMAIN, "[MSLiteNapi]", __VA_ARGS__))
   #define LOGE(...) ((void)OH_LOG_Print(LOG_APP, LOG_ERROR, LOG_DOMAIN, "[MSLiteNapi]", __VA_ARGS__))

   void *ReadModelFile(NativeResourceManager *nativeResourceManager, const std::string &modelName, size_t *modelSize) {
       auto rawFile = OH_ResourceManager_OpenRawFile(nativeResourceManager, modelName.c_str());
       if (rawFile == nullptr) {
           LOGE("MS_LITE_ERR: Open model file failed");
           return nullptr;
       }
       long fileSize = OH_ResourceManager_GetRawFileSize(rawFile);
       void *modelBuffer = malloc(fileSize);
       if (modelBuffer == nullptr) {
           LOGE("MS_LITE_ERR: malloc model buffer failed");
           OH_ResourceManager_CloseRawFile(rawFile);
           return nullptr;
       }
       int ret = OH_ResourceManager_ReadRawFile(rawFile, modelBuffer, fileSize);
       if (ret == 0) {
           LOGE("MS_LITE_ERR: OH_ResourceManager_ReadRawFile failed");
           free(modelBuffer);
           OH_ResourceManager_CloseRawFile(rawFile);
           return nullptr;
       }
       OH_ResourceManager_CloseRawFile(rawFile);
       *modelSize = fileSize;
       return modelBuffer;
   }
   ```

3. Create a context, set parameters such as the number of threads and device type, and load the model.

   ```c++
   void DestroyModelBuffer(void **buffer) {
       if (buffer == nullptr) {
           return;
       }
       free(*buffer);
       *buffer = nullptr;
   }

   OH_AI_ModelHandle CreateMSLiteModel(void *modelBuffer, size_t modelSize) {
       // Set the execution context for the model.
       auto context = OH_AI_ContextCreate();
       if (context == nullptr) {
           DestroyModelBuffer(&modelBuffer);
           LOGE("MS_LITE_ERR: Create MSLite context failed.\n");
           return nullptr;
       }
       auto cpu_device_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);

       OH_AI_DeviceInfoSetEnableFP16(cpu_device_info, true);
       OH_AI_ContextAddDeviceInfo(context, cpu_device_info);

       // Create the model.
       auto model = OH_AI_ModelCreate();
       if (model == nullptr) {
           DestroyModelBuffer(&modelBuffer);
           LOGE("MS_LITE_ERR: Allocate MSLite Model failed.\n");
           return nullptr;
       }

       // Build the model from the buffer.
       auto build_ret = OH_AI_ModelBuild(model, modelBuffer, modelSize, OH_AI_MODELTYPE_MINDIR, context);
       DestroyModelBuffer(&modelBuffer);
       if (build_ret != OH_AI_STATUS_SUCCESS) {
           OH_AI_ModelDestroy(&model);
           LOGE("MS_LITE_ERR: Build MSLite model failed.\n");
           return nullptr;
       }
       LOGI("MS_LITE_LOG: Build MSLite model success.\n");
       return model;
   }
   ```
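
   The sample keeps the default context settings and only enables float16 inference on the CPU. If you also want to control the number of inference threads or core binding, as this step's description mentions, the context provides setters for this. Below is a minimal sketch with illustrative values (2 threads, big cores preferred), called before OH_AI_ContextAddDeviceInfo; the helper name ConfigureContext is hypothetical.

   ```c++
   // Optional context tuning (illustrative values, not required by the sample).
   void ConfigureContext(OH_AI_ContextHandle context) {
       // Run inference on 2 threads.
       OH_AI_ContextSetThreadNum(context, 2);
       // Thread affinity mode: 0 = no binding, 1 = prefer big cores, 2 = prefer little cores.
       OH_AI_ContextSetThreadAffinityMode(context, 1);
   }
   ```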

4. Set the model input data and perform model inference.

   ```c++
   constexpr int K_NUM_PRINT_OF_OUT_DATA = 20;

   // Set the model input data.
   int FillInputTensor(OH_AI_TensorHandle input, const std::vector<float> &input_data) {
       if (OH_AI_TensorGetDataType(input) != OH_AI_DATATYPE_NUMBERTYPE_FLOAT32) {
           return OH_AI_STATUS_LITE_ERROR;
       }
       // Make sure the preprocessed buffer matches the tensor's element count.
       if (OH_AI_TensorGetElementNum(input) != static_cast<int64_t>(input_data.size())) {
           return OH_AI_STATUS_LITE_ERROR;
       }
       float *data = (float *)OH_AI_TensorGetMutableData(input);
       for (size_t i = 0; i < input_data.size(); i++) {
           data[i] = input_data[i];
       }
       return OH_AI_STATUS_SUCCESS;
   }

   // Execute model inference.
   int RunMSLiteModel(OH_AI_ModelHandle model, const std::vector<float> &input_data) {
       // Set the input data for the model.
       auto inputs = OH_AI_ModelGetInputs(model);

       auto ret = FillInputTensor(inputs.handle_list[0], input_data);
       if (ret != OH_AI_STATUS_SUCCESS) {
           LOGE("MS_LITE_ERR: RunMSLiteModel set input error.\n");
           return OH_AI_STATUS_LITE_ERROR;
       }
       // Get the model outputs.
       auto outputs = OH_AI_ModelGetOutputs(model);
       // Run inference. On failure, the caller is responsible for destroying the model.
       auto predict_ret = OH_AI_ModelPredict(model, inputs, &outputs, nullptr, nullptr);
       if (predict_ret != OH_AI_STATUS_SUCCESS) {
           LOGE("MS_LITE_ERR: MSLite Predict error.\n");
           return OH_AI_STATUS_LITE_ERROR;
       }
       LOGI("MS_LITE_LOG: Run MSLite model Predict success.\n");
       // Print the output tensor data.
       LOGI("MS_LITE_LOG: Get model outputs:\n");
       for (size_t i = 0; i < outputs.handle_num; i++) {
           auto tensor = outputs.handle_list[i];
           LOGI("MS_LITE_LOG: - Tensor %{public}d name is: %{public}s.\n", static_cast<int>(i),
                OH_AI_TensorGetName(tensor));
           LOGI("MS_LITE_LOG: - Tensor %{public}d size is: %{public}d.\n", static_cast<int>(i),
                (int)OH_AI_TensorGetDataSize(tensor));
           LOGI("MS_LITE_LOG: - Tensor data is:\n");
           auto out_data = reinterpret_cast<const float *>(OH_AI_TensorGetData(tensor));
           std::stringstream outStr;
           for (int j = 0; (j < OH_AI_TensorGetElementNum(tensor)) && (j <= K_NUM_PRINT_OF_OUT_DATA); j++) {
               outStr << out_data[j] << " ";
           }
           LOGI("MS_LITE_LOG: %{public}s", outStr.str().c_str());
       }
       return OH_AI_STATUS_SUCCESS;
   }
   ```
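
   If inference fails or returns unexpected values, a quick diagnostic is to log the model's expected input shape before filling it (for mobilenetv2.ms it should match the preprocessed 1 x 224 x 224 x 3 buffer). A small sketch using the tensor shape API; the helper name LogInputShape is illustrative:

   ```c++
   // Log the shape of the first input tensor (diagnostic sketch).
   void LogInputShape(OH_AI_ModelHandle model) {
       auto inputs = OH_AI_ModelGetInputs(model);
       size_t shape_num = 0;
       const int64_t *shape = OH_AI_TensorGetShape(inputs.handle_list[0], &shape_num);
       std::stringstream shapeStr;
       for (size_t i = 0; i < shape_num; i++) {
           shapeStr << shape[i] << " ";
       }
       LOGI("MS_LITE_LOG: input tensor shape: %{public}s", shapeStr.str().c_str());
   }
   ```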

5. Implement the complete model inference process.

   ```c++
   static napi_value RunDemo(napi_env env, napi_callback_info info) {
       LOGI("MS_LITE_LOG: Enter runDemo()");
       napi_value error_ret;
       napi_create_int32(env, -1, &error_ret);
       // Process the input data.
       size_t argc = 2;
       napi_value argv[2] = {nullptr};
       napi_get_cb_info(env, info, &argc, argv, nullptr, nullptr);
       bool isArray = false;
       napi_is_array(env, argv[0], &isArray);
       if (!isArray) {
           LOGE("MS_LITE_ERR: The first argument is not an array");
           return error_ret;
       }
       // Obtain the length of the array.
       uint32_t length = 0;
       napi_get_array_length(env, argv[0], &length);
       LOGI("MS_LITE_LOG: argv array length = %{public}d", length);
       std::vector<float> input_data;
       double param = 0;
       for (uint32_t i = 0; i < length; i++) {
           napi_value value;
           napi_get_element(env, argv[0], i, &value);
           napi_get_value_double(env, value, &param);
           input_data.push_back(static_cast<float>(param));
       }
       std::stringstream outstr;
       for (int i = 0; i < K_NUM_PRINT_OF_OUT_DATA && i < static_cast<int>(input_data.size()); i++) {
           outstr << input_data[i] << " ";
       }
       LOGI("MS_LITE_LOG: input_data = %{public}s", outstr.str().c_str());
       // Read the model file.
       const std::string modelName = "mobilenetv2.ms";
       LOGI("MS_LITE_LOG: Run model: %{public}s", modelName.c_str());
       size_t modelSize;
       auto resourcesManager = OH_ResourceManager_InitNativeResourceManager(env, argv[1]);
       auto modelBuffer = ReadModelFile(resourcesManager, modelName, &modelSize);
       if (modelBuffer == nullptr) {
           LOGE("MS_LITE_ERR: Read model failed");
           return error_ret;
       }
       LOGI("MS_LITE_LOG: Read model file success");
       auto model = CreateMSLiteModel(modelBuffer, modelSize);
       if (model == nullptr) {
           LOGE("MS_LITE_ERR: MSLiteFwk Build model failed.\n");
           return error_ret;
       }
       int ret = RunMSLiteModel(model, input_data);
       if (ret != OH_AI_STATUS_SUCCESS) {
           OH_AI_ModelDestroy(&model);
           LOGE("MS_LITE_ERR: RunMSLiteModel failed.\n");
           return error_ret;
       }
       // Copy the first output tensor into an ArkTS array.
       napi_value out_data;
       napi_create_array(env, &out_data);
       auto outputs = OH_AI_ModelGetOutputs(model);
       OH_AI_TensorHandle output_0 = outputs.handle_list[0];
       float *output0Data = reinterpret_cast<float *>(OH_AI_TensorGetMutableData(output_0));
       for (size_t i = 0; i < OH_AI_TensorGetElementNum(output_0); i++) {
           napi_value element;
           napi_create_double(env, static_cast<double>(output0Data[i]), &element);
           napi_set_element(env, out_data, i, element);
       }
       OH_AI_ModelDestroy(&model);
       LOGI("MS_LITE_LOG: Exit runDemo()");
       return out_data;
   }
   ```
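
   RunDemo is only reachable from ArkTS after the native library registers it with N-API. The registration boilerplate is not shown above; the following sketch is the standard DevEco Studio native template, assuming the module name entry (matching libentry.so in the next steps):

   ```c++
   EXTERN_C_START
   static napi_value Init(napi_env env, napi_value exports) {
       // Expose the native RunDemo function to ArkTS as "runDemo".
       napi_property_descriptor desc[] = {
           {"runDemo", nullptr, RunDemo, nullptr, nullptr, nullptr, napi_default, nullptr}
       };
       napi_define_properties(env, exports, sizeof(desc) / sizeof(desc[0]), desc);
       return exports;
   }
   EXTERN_C_END

   static napi_module demoModule = {
       .nm_version = 1,
       .nm_flags = 0,
       .nm_filename = nullptr,
       .nm_register_func = Init,
       .nm_modname = "entry",
       .nm_priv = nullptr,
       .reserved = {0},
   };

   // Register the module when libentry.so is loaded.
   extern "C" __attribute__((constructor)) void RegisterEntryModule(void) {
       napi_module_register(&demoModule);
   }
   ```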

6. Write the **CMake** script to link the MindSpore Lite dynamic library.

   ```cmake
   # The minimum version of CMake.
   cmake_minimum_required(VERSION 3.4.1)
   project(MindSporeLiteCDemo)

   set(NATIVERENDER_ROOT_PATH ${CMAKE_CURRENT_SOURCE_DIR})

   if(DEFINED PACKAGE_FIND_FILE)
       include(${PACKAGE_FIND_FILE})
   endif()

   include_directories(${NATIVERENDER_ROOT_PATH}
                       ${NATIVERENDER_ROOT_PATH}/include)

   add_library(entry SHARED mslite_napi.cpp)
   target_link_libraries(entry PUBLIC mindspore_lite_ndk)
   target_link_libraries(entry PUBLIC hilog_ndk.z)
   target_link_libraries(entry PUBLIC rawfile.z)
   target_link_libraries(entry PUBLIC ace_napi.z)
   ```

#### Encapsulating the C++ Dynamic Library into an ArkTS Module Using N-API

1. In **entry/src/main/cpp/types/libentry/Index.d.ts**, define the ArkTS API **runDemo()**. The content is as follows:

   ```ts
   export const runDemo: (a: number[], b: Object) => Array<number>;
   ```

2. In the **oh-package.json5** file, associate the API with the .so file to form a complete ArkTS module.

   ```json
   {
     "name": "libentry.so",
     "types": "./Index.d.ts",
     "version": "1.0.0",
     "description": "MindSpore Lite inference module"
   }
   ```

#### Invoking the Encapsulated ArkTS Module for Inference

In **entry/src/main/ets/pages/Index.ets**, call the encapsulated ArkTS module and process the inference result.

```ts
import msliteNapi from 'libentry.so';
import { resourceManager } from '@kit.LocalizationKit';

let resMgr: resourceManager.ResourceManager = getContext().getApplicationContext().resourceManager;
let max: number = 0;
let maxIndex: number = 0;
let maxArray: Array<number> = [];
let maxIndexArray: Array<number> = [];

// Call the runDemo function of C++. The buffer data of the input image is stored in float32View after preprocessing. For details, see Image Input and Preprocessing.
console.info('MS_LITE_LOG: *** Start MSLite Demo ***');
let output: Array<number> = msliteNapi.runDemo(Array.from(float32View), resMgr);
// Obtain the five highest-scoring categories.
max = 0;
maxIndex = 0;
maxArray = [];
maxIndexArray = [];
let newArray = output.filter(value => value !== max);
for (let n = 0; n < 5; n++) {
  max = newArray[0];
  maxIndex = 0;
  for (let m = 0; m < newArray.length; m++) {
    if (newArray[m] > max) {
      max = newArray[m];
      maxIndex = m;
    }
  }
  maxArray.push(Math.round(max * 10000));
  maxIndexArray.push(maxIndex);
  // Remove the current maximum before the next iteration.
  newArray = newArray.filter(value => value !== max);
}
console.info('MS_LITE_LOG: max:' + maxArray);
console.info('MS_LITE_LOG: maxIndex:' + maxIndexArray);
console.info('MS_LITE_LOG: *** Finished MSLite Demo ***');
```

### Debugging and Verification

1. In DevEco Studio, connect the device and click **Run entry** to build and run your HAP.

   ```shell
   Launching com.samples.mindsporelitecdemo
   $ hdc shell aa force-stop com.samples.mindsporelitecdemo
   $ hdc shell mkdir data/local/tmp/xxx
   $ hdc file send C:\Users\xxx\MindSporeLiteCDemo\entry\build\default\outputs\default\entry-default-signed.hap "data/local/tmp/xxx"
   $ hdc shell bm install -p data/local/tmp/xxx
   $ hdc shell rm -rf data/local/tmp/xxx
   $ hdc shell aa start -a EntryAbility -b com.samples.mindsporelitecdemo
   ```

2. Touch the **photo** button on the device screen, select an image, and touch **OK**. The classification result of the selected image is displayed on the device screen. In the log output, filter by the keyword **MS_LITE**. The following information is displayed:

   ```text
   08-05 17:15:52.001   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: PhotoViewPicker.select successfully, photoSelectResult uri: {"photoUris":["file://media/Photo/13/IMG_1501955351_012/plant.jpg"]}
   ...
   08-05 17:15:52.627   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: crop info.width = 224
   08-05 17:15:52.627   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: crop info.height = 224
   08-05 17:15:52.628   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: Succeeded in reading image pixel data, buffer: 200704
   08-05 17:15:52.971   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: float32View data: float32View data: 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143
   08-05 17:15:52.971   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: *** Start MSLite Demo ***
   08-05 17:15:53.454   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: Build MSLite model success.
   08-05 17:15:53.753   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: Run MSLite model Predict success.
   08-05 17:15:53.753   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: Get model outputs:
   08-05 17:15:53.753   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: - Tensor 0 name is: Default/head-MobileNetV2Head/Sigmoid-op466.
   08-05 17:15:53.753   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: - Tensor data is:
   08-05 17:15:53.753   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: 3.43385e-06 1.40285e-05 9.11969e-07 4.91007e-05 9.50266e-07 3.94537e-07 0.0434676 3.97196e-05 0.00054832 0.000246202 1.576e-05 3.6494e-06 1.23553e-05 0.196977 5.3028e-05 3.29346e-05 4.90475e-07 1.66109e-06 7.03273e-06 8.83677e-07 3.1365e-06
   08-05 17:15:53.781   4684-4684    A03d00/JSAPP                   pid-4684              W     MS_LITE_WARN: output length =  500 ;value =  0.0000034338463592575863,0.000014028532859811094,9.119685273617506e-7,0.000049100715841632336,9.502661555416125e-7,3.945370394831116e-7,0.04346757382154465,0.00003971960904891603,0.0005483203567564487,0.00024620210751891136,0.000015759984307806008,0.0000036493988773145247,0.00001235533181898063,0.1969769448041916,0.000053027983085485175,0.000032934600312728435,4.904751449430478e-7,0.0000016610861166554969,0.000007032729172351537,8.836767619868624e-7
   08-05 17:15:53.831   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: max:9497,7756,1970,435,46
   08-05 17:15:53.831   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: maxIndex:323,46,13,6,349
   08-05 17:15:53.831   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: *** Finished MSLite Demo ***
   ```


### Effects

Touch the **photo** button on the device screen, select an image, and touch **OK**. The top 4 categories of the image are displayed below the image.

<img src="figures/stepc1.png"  width="20%"/>     <img src="figures/step2.png" width="20%"/>     <img src="figures/step3.png" width="20%"/>     <img src="figures/stepc4.png" width="20%"/>