# CrosML: Chrome OS Machine Learning Service

## Summary

The Machine Learning (ML) Service provides a common runtime for evaluating machine learning models on device. The service wraps the TensorFlow Lite runtime and provides infrastructure for the deployment of trained models. Chromium communicates with the ML Service via a Mojo interface.

## How to use ML Service

You need to provide your trained models to the ML Service by following these instructions. You can then load and use your model from Chromium using the client library provided at `//chromeos/services/machine_learning/public/cpp/`.
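
As a rough illustration, a Chromium-side client might load a model along the following lines. This is a minimal sketch only: the `BuiltinModelId::TEST_MODEL` value, the header paths, and the exact `ServiceConnection`/callback signatures are assumptions based on the mojom files in this directory and may differ between milestones.

```cpp
// Minimal sketch of a Chromium-side client of the ML Service. The model
// ID, header paths, and callback signatures are assumptions.
#include "base/functional/bind.h"
#include "chromeos/services/machine_learning/public/cpp/service_connection.h"
#include "chromeos/services/machine_learning/public/mojom/machine_learning_service.mojom.h"
#include "mojo/public/cpp/bindings/remote.h"

namespace cml = chromeos::machine_learning;

void LoadModelExample() {
  // Bind a Mojo remote to a model hosted by the ML Service daemon.
  mojo::Remote<cml::mojom::Model> model;
  cml::ServiceConnection::GetInstance()->LoadBuiltinModel(
      cml::mojom::BuiltinModelSpec::New(cml::mojom::BuiltinModelId::TEST_MODEL),
      model.BindNewPipeAndPassReceiver(),
      base::BindOnce([](cml::mojom::LoadModelResult result) {
        // Expect LoadModelResult::OK on success.
      }));

  // Inference then goes through a GraphExecutor created from the model,
  // e.g. (sketched):
  //   mojo::Remote<cml::mojom::GraphExecutor> executor;
  //   model->CreateGraphExecutor(executor.BindNewPipeAndPassReceiver(), ...);
  //   executor->Execute(std::move(inputs), output_names, ...);
}
```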

## Metrics

The following metrics are currently recorded by the daemon process in order to understand its resource costs in the wild:

*   MachineLearningService.MojoConnectionEvent: Success/failure of the D-Bus->Mojo bootstrap.
*   MachineLearningService.PrivateMemoryKb: Private (unshared) memory footprint, sampled every 5 minutes.
*   MachineLearningService.PeakPrivateMemoryKb: Peak value of MachineLearningService.PrivateMemoryKb per 24-hour period. Daemon code can also call ml::Metrics::UpdateCumulativeMetricsNow() at any time to take a peak-memory observation, in order to catch short-lived memory usage spikes (see the sketch after this list).
*   MachineLearningService.CpuUsageMilliPercent: Fraction of total CPU resources consumed by the daemon, sampled every 5 minutes, in units of milli-percent (1/100,000).
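
For instance, a daemon-side helper could force such an observation right after a known allocation spike. Only `ml::Metrics::UpdateCumulativeMetricsNow()` is named above; the helper itself and the way the `Metrics` instance is passed in are hypothetical.

```cpp
#include "ml/metrics.h"

// Hypothetical helper: take a peak-memory sample immediately after a
// large transient allocation, instead of waiting for the next periodic
// (5-minute) sample, so that the spike is reflected in
// MachineLearningService.PeakPrivateMemoryKb.
void RecordPeakAfterSpike(ml::Metrics* metrics) {
  metrics->UpdateCumulativeMetricsNow();
}
```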

TODO(amoylan): Additional metrics to be added, ideally per-model, to understand levels of performance that clients are getting:

*   LoadModel time
*   CreateGraphExecutor time
*   Model inference time

## Design docs