Key Points
- Windows Machine Learning (ML) enables developers to run ONNX AI models locally on Windows PCs via the ONNX Runtime, with automatic execution provider management for different hardware.
- Windows ML provides a shared, Windows-wide copy of the ONNX Runtime, plus the ability to dynamically download execution providers, reducing app size and broadening hardware support.
- Windows ML is available on all Windows 11 PCs (x64 and ARM64) and supports CPUs, integrated and discrete GPUs, and NPUs.
Microsoft has introduced Windows Machine Learning (ML), a platform that enables developers to run ONNX AI models locally on Windows PCs. Windows ML ships a shared, Windows-wide copy of the ONNX Runtime and lets developers dynamically download execution providers (EPs) for different hardware configurations. The ONNX Runtime can run models exported from various frameworks, including PyTorch, TensorFlow/Keras, TFLite, and scikit-learn.
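As a rough illustration of getting a model into ONNX format in the first place, the sketch below exports a small PyTorch model with `torch.onnx.export`. The model architecture and output file name are hypothetical and chosen only for the example.

```python
# Minimal sketch: exporting a PyTorch model to ONNX so it can be run on Windows ML.
# The TinyClassifier model and "tiny_classifier.onnx" file name are illustrative only.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
dummy_input = torch.randn(1, 16)  # example input that defines the graph's shape

torch.onnx.export(
    model,
    dummy_input,
    "tiny_classifier.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},  # allow a variable batch size at inference time
)
```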
The key benefits of Windows ML include automatic delivery of the latest EPs, a shared ONNX Runtime, and smaller downloads and installs. Developers no longer need to bundle large EPs or the ONNX Runtime in their apps, which keeps packages small while broadening hardware support. Windows ML is available on all Windows 11 PCs (x64 and ARM64) with any hardware configuration, making it a versatile solution for developers.
To use Windows ML, a system must run Windows 11 version 24H2 (build 26100) or later on either x64 or ARM64 architecture. Beyond that, the platform supports any hardware configuration, including CPUs, integrated or discrete GPUs, and NPUs.
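As an illustrative pre-flight check (not part of the Windows ML API itself), a short Python script can confirm the build number before an app attempts to use the platform; the 26100 threshold comes from the requirement above.

```python
# Minimal sketch: verify the machine meets the Windows 11 24H2 (build 26100) requirement.
# This is an illustrative check, not an official Windows ML API.
import platform
import sys

MIN_BUILD = 26100  # Windows 11 version 24H2

def meets_windows_ml_requirements() -> bool:
    if platform.system() != "Windows":
        return False
    build = sys.getwindowsversion().build  # available only on Windows
    return build >= MIN_BUILD

if __name__ == "__main__":
    print("Windows ML supported:", meets_windows_ml_requirements())
```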
An execution provider (EP) is a component that enables hardware-specific optimizations for ML operations. Windows ML includes a copy of the ONNX Runtime and allows developers to dynamically download vendor-specific EPs, optimizing model inference across different hardware configurations. The platform also handles deployment automatically, eliminating the need to bundle EPs for specific hardware vendors or create separate app builds for different execution providers.
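To make the EP concept concrete, here is a small sketch using the standard ONNX Runtime Python API (not the Windows ML-specific bindings): it lists the providers registered on the machine and creates an inference session with a preferred order, falling back to the CPU. The model path and provider preference are assumptions for illustration.

```python
# Minimal sketch: choosing execution providers with the ONNX Runtime Python API.
# "tiny_classifier.onnx" and the provider preference list are illustrative assumptions.
import onnxruntime as ort

# Which providers are actually available depends on the installed runtime, hardware, and drivers.
available = ort.get_available_providers()
print("Available execution providers:", available)

# Prefer a hardware-accelerated EP when present, always keeping CPU as the fallback.
preferred = [p for p in ("DmlExecutionProvider", "CUDAExecutionProvider") if p in available]
preferred.append("CPUExecutionProvider")

session = ort.InferenceSession("tiny_classifier.onnx", providers=preferred)
print("Session is using:", session.get_providers())
```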
Windows ML has undergone significant performance optimization, working directly with dedicated execution providers for GPUs and NPUs to deliver best-in-class performance. The platform also provides flexible options for managing AI models, including the Model Catalog and Local models. Additionally, Windows ML serves as the foundation for the broader Windows AI platform, providing built-in models for common tasks, ready-to-use AI models, and direct API access for advanced scenarios.
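Once a model is available locally, inference itself is a single `run` call on the session. The following self-contained sketch continues the hypothetical classifier example from above; the input name and shape match that exported model.

```python
# Minimal sketch: running inference on the hypothetical model from the earlier export sketch.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("tiny_classifier.onnx", providers=["CPUExecutionProvider"])

x = np.random.rand(1, 16).astype(np.float32)  # matches the exported (batch, 16) input
outputs = session.run(None, {"input": x})     # None -> return every model output
logits = outputs[0]
print("Predicted class:", int(logits.argmax(axis=1)[0]))
```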
Developers can convert models from other formats to ONNX to use them with Windows ML, and the platform provides a flexible way to access ML execution providers. Windows ML also integrates with the Windows AI ecosystem, providing a range of tools and resources for developers. For those who encounter issues or have suggestions, Microsoft encourages feedback through the Windows App SDK GitHub. As Windows ML continues to evolve, it is expected to play a significant role in the development of AI-powered applications on Windows PCs.
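Conversion tools exist for frameworks beyond PyTorch as well. The sketch below uses the skl2onnx package to convert a scikit-learn model; the specific model, feature count, and output path are chosen purely for illustration.

```python
# Minimal sketch: converting a scikit-learn model to ONNX with skl2onnx.
# The LogisticRegression model, 4-feature input, and output path are illustrative only.
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=500).fit(X, y)

# Declare the input signature: a float tensor with a dynamic batch dimension and 4 features.
onnx_model = convert_sklearn(model, initial_types=[("input", FloatTensorType([None, 4]))])

with open("iris_logreg.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```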
Read the rest: Source Link