Enum InferenceCoreMLFlags

Namespace: MaaFramework.Binding
Assembly: MaaFramework.Binding.dll

InferenceCoreMLFlags defines the boolean options that can be set for the CoreML Execution Provider (EP).

[Flags]
public enum InferenceCoreMLFlags
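Because the enum is marked [Flags], individual options can be combined with bitwise OR and queried with HasFlag. A minimal sketch in plain C#; how the combined value is ultimately handed to the CoreML EP is binding-specific and not shown here:

using MaaFramework.Binding;

// Combine several CoreML EP options into a single flags value.
var coreMLFlags = InferenceCoreMLFlags.EnableOnSubgraph
                | InferenceCoreMLFlags.OnlyAllowStaticInputShapes;

// Query an individual option.
bool staticShapesOnly = coreMLFlags.HasFlag(InferenceCoreMLFlags.OnlyAllowStaticInputShapes);
System.Console.WriteLine(staticShapesOnly); // True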

Fields

Auto = -1

Used by the binding to select flags automatically.

CreateMLProgram = 16

Create an MLProgram.

By default, a NeuralNetwork model is created. Requires Core ML 5 or later.

EnableOnSubgraph = 2

Enable the CoreML EP on subgraphs.

Last = 32

Keep Last at the end of the enum definition, and assign it the value of the last CoreML flag.
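Because Last mirrors the value of the final flag, it can bound a loop over every single-bit flag; the loop below is only an illustration, not part of the binding:

using MaaFramework.Binding;

// Enumerate each defined flag bit, from UseCpuOnly (1) up to Last (32).
for (var bit = 1; bit <= (int)InferenceCoreMLFlags.Last; bit <<= 1)
{
    System.Console.WriteLine((InferenceCoreMLFlags)bit);
}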
None = 0

Do not use the CoreML Execution Provider.

OnlyAllowStaticInputShapes = 8

Only allow the CoreML EP to take nodes whose inputs have static shapes.

By default, inputs with dynamic shapes are also allowed.

However, performance may be negatively impacted if inputs have dynamic shapes.

OnlyEnableDeviceWithAne = 4

By default, the CoreML Execution Provider is enabled for all compatible Apple devices.

Enabling this option restricts the CoreML EP to Apple devices with an ANE (Apple Neural Engine).

Please note that enabling this option does not guarantee the entire model will be executed on the ANE only.

UseCpuAndGpu = 32

See: MLComputeUnits.

There are four compute units: MLComputeUnitsCPUAndNeuralEngine, MLComputeUnitsCPUAndGPU, MLComputeUnitsCPUOnly, and MLComputeUnitsAll.

Different compute units have different performance and power consumption characteristics.
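Conceptually, the CPU/GPU-related flags choose among the compute units listed above. The helper below is a hypothetical illustration of that correspondence; the real translation happens inside the CoreML EP, not in this binding:

using MaaFramework.Binding;

// Hypothetical mapping from flags to the MLComputeUnits name they roughly correspond to.
static string ToComputeUnitsName(InferenceCoreMLFlags flags)
{
    if (flags.HasFlag(InferenceCoreMLFlags.UseCpuOnly)) return "MLComputeUnitsCPUOnly";
    if (flags.HasFlag(InferenceCoreMLFlags.UseCpuAndGpu)) return "MLComputeUnitsCPUAndGPU";
    return "MLComputeUnitsAll"; // assumed default: all available compute units
}

System.Console.WriteLine(ToComputeUnitsName(InferenceCoreMLFlags.UseCpuAndGpu)); // MLComputeUnitsCPUAndGPU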
UseCpuOnly = 1

Use the CPU only in the CoreML EP. This may decrease performance, but it provides reference output values without precision loss, which is useful for validation.
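For example, a validation run could force CPU-only execution to obtain reference outputs without precision loss, while a normal run lets the binding decide. A small hypothetical helper (the name and the decision logic are assumptions, not binding API):

using MaaFramework.Binding;

// Hypothetical: pick CPU-only flags when bit-exact reference outputs are needed.
static InferenceCoreMLFlags SelectCoreMLFlags(bool validating) =>
    validating
        ? InferenceCoreMLFlags.UseCpuOnly // slower, but no precision loss
        : InferenceCoreMLFlags.Auto;      // let the binding choose automatically

System.Console.WriteLine(SelectCoreMLFlags(validating: true)); // UseCpuOnly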