Setup CNTK on your machine
CNTK Setup
Install CNTK for the first time or update to a new version
The Microsoft Cognitive Toolkit (CNTK) supports both 64-bit Windows and 64-bit Linux platforms.
You can download the complete source code and build the binaries on your machine, or use one of the regular precompiled binary drops of the CNTK executables, which include sample data and sample models.
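Since CNTK supports only 64-bit platforms, a quick sanity check before installing is to confirm the machine's architecture. A minimal sketch for Linux, assuming `uname -m` reporting `x86_64` is what identifies a supported CPU:

```shell
# Report whether this Linux machine meets CNTK's 64-bit requirement.
arch="$(uname -m)"
case "$arch" in
  x86_64)
    echo "64-bit x86 platform detected ($arch): supported" ;;
  *)
    echo "Platform '$arch' reported; CNTK requires a 64-bit x86 platform" ;;
esac
```

On Windows, the equivalent check is the "System type" entry in the System control panel.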
Binary Installation (or Update) of CNTK
If you just want to download and install the latest precompiled binaries on your machine, follow the instructions here:

| Windows | Linux |
|---|---|
| Script-driven installation | Script-driven installation |
| Manual Installation | Manual Installation |
| | Docker Installation |
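After a binary installation, one quick way to verify that the setup worked is to import the CNTK Python package. A hedged sketch, assuming `python3` or `python` is on `PATH` and that the build exposes `cntk.__version__` (true for CNTK 2.x); the exact environment-activation step depends on which install method you chose above:

```shell
# Verify a CNTK binary installation by importing the Python package.
PY="$(command -v python3 || command -v python || true)"
if [ -n "$PY" ] && "$PY" -c "import cntk" 2>/dev/null; then
  # Import succeeded: report the installed version.
  msg="$("$PY" -c "import cntk; print('CNTK version:', cntk.__version__)")"
else
  # Import failed: point back to the installation table.
  msg="CNTK Python package not importable; check the installation steps above."
fi
echo "$msg"
```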
Installation and building of the CNTK codebase
If you want to take a look at the CNTK source code, compile CNTK yourself, make changes to the codebase, or contribute those changes back to the community, these are the pages for you:
- Setting up CNTK from Source Code on Windows
- Setting up CNTK from Source Code on Linux
- Enabling 1bit SGD
- Developing and Testing
- CNTK Production Test Configurations
Installation as Azure Virtual Machine or Linux Docker container
You can use CNTK through the Microsoft Azure Virtual Machine offering (Windows and Linux) or run it as a Docker container (Linux only). See the CNTK on Azure and CNTK Docker Containers pages for details.
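For the Docker route, the typical workflow is to pull the CNTK image and start an interactive container. The image name `microsoft/cntk` below is the historical Docker Hub repository name and is an assumption here; verify the current image name and tag on the CNTK Docker Containers page. The sketch only prints the commands rather than pulling the (large) image:

```shell
# Print the typical pull/run commands for the CNTK Linux Docker image.
# "microsoft/cntk" is the historical image name; verify it before use.
cmds='docker pull microsoft/cntk
docker run -it microsoft/cntk /bin/bash'
printf '%s\n' "$cmds"
```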
Usage and Samples
If you want to learn more about CNTK usage and how to run the provided samples, you will find more information on the following pages: