How to Use a Specific Backend during Inference

The ONE runtime provides several ways to select a specific backend during inference.

Using NNFW API

nnfw_set_available_backends

  • Multiple backends can be set; they must be separated by semicolons (e.g. "acl_cl;cpu").
  • For each backend string, libbackend_{backend}.so is dynamically loaded during nnfw_prepare.
  • Among the listed backends, the first one is used as the default backend (see the sketch below).
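
For reference, here is a minimal C sketch of this flow. It is illustrative only: error checking is omitted, the nnpackage path is a placeholder, and input/output setup is elided; the calls themselves come from the NNFW C API declared in nnfw.h.

    #include <nnfw.h>

    int main(void)
    {
      nnfw_session *session = NULL;
      nnfw_create_session(&session);
      nnfw_load_model_from_file(session, "path/to/nnpackage");  /* placeholder path */

      /* acl_cl is listed first, so it becomes the default backend; cpu is also available.
       * This must be called before nnfw_prepare, which loads libbackend_{backend}.so. */
      nnfw_set_available_backends(session, "acl_cl;cpu");

      nnfw_prepare(session);
      /* ... set inputs and outputs here ... */
      nnfw_run(session);
      nnfw_close_session(session);
      return 0;
    }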

nnfw_set_op_backend

  • A backend assigned to an operation with this API takes priority over the available backends set by nnfw_set_available_backends (see the sketch below).
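
The sketch below shows the call in context. The helper name and the operation name string ("CONV_2D") are assumptions made for illustration; check nnfw.h for the exact operation names this API accepts.

    #include <nnfw.h>

    /* Configure backends on an already-created session, before nnfw_prepare is called. */
    void choose_backends(nnfw_session *session)
    {
      /* cpu is listed first, so it is the default backend; ruy is also made available. */
      nnfw_set_available_backends(session, "cpu;ruy");

      /* Pin convolution operations to the ruy backend. The operation name
       * ("CONV_2D") is assumed here -- consult nnfw.h for the accepted names. */
      nnfw_set_op_backend(session, "CONV_2D", "ruy");
    }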

Using Environment Variables

1. BACKENDS

  • Same as nnfw_set_available_backends
  • Example
      BACKENDS=cpu ./Product/out/bin/onert_run ...
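
Since BACKENDS takes the same semicolon-separated list as nnfw_set_available_backends, multiple backends can be given as well; for example, to make acl_cl the default while keeping cpu available:

      BACKENDS="acl_cl;cpu" ./Product/out/bin/onert_run ...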

2. OP_BACKEND_[OP_TYPE]

  • Same as nnfw_set_op_backend
  • Sets the backend for a specific operator type.
  • Example
    • Execute the Conv2D operator on the ruy backend and all other operators on the cpu backend:
      OP_BACKEND_Conv2D=ruy BACKENDS="cpu;ruy" ./Product/out/bin/onert_run ...

3. OP_BACKEND_MAP

  • Sets the backend for a specific operator instance, identified by its index.
  • Format: <op_id>=<backend>;<op_id>=<backend>...
  • Example
    • Execute operator 10 on the acl_cl backend and all other operators on the acl_neon backend:
      OP_BACKEND_MAP="10=acl_cl" BACKENDS="acl_neon;acl_cl" ./Product/out/bin/onert_run ...
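
As the format above indicates, several <op_id>=<backend> entries can be combined with semicolons. The operator indices below are only illustrative:

      OP_BACKEND_MAP="5=cpu;10=acl_cl" BACKENDS="acl_neon;acl_cl;cpu" ./Product/out/bin/onert_run ...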