Deep Neural Net (DNN) Model Inference Task
A Deep Neural Net (DNN) Model Inference Task is a neural network model inference task that applies a trained deep neural network to new, unseen data in order to make predictions or decisions.
- Context:
- It can (typically) involve processing input data through the multiple layers of the neural network in a single forward pass (see the forward-pass sketch after this list).
- It can (typically) be executed on various hardware platforms, ranging from high-performance GPUs and TPUs to CPUs and resource-constrained edge devices.
- It can (often) require optimization techniques, such as a quantization technique, a pruning technique, or a compression technique, to improve inference speed and reduce memory footprint (see the quantization sketch after this list).
- It can (typically) be supported by a DNN Model Inference System (that implements a DNN model inference algorithm).
- ...
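As an illustration of the forward pass mentioned above, below is a minimal NumPy sketch of DNN inference as a sequence of layer applications. The network shape, the random weights, and the function names are illustrative assumptions; in practice the weights would come from a completed DNN training task.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def dnn_inference(x, layers):
    """Propagate an input vector through each (weights, bias) layer in turn."""
    *hidden, last = layers
    for W, b in hidden:
        x = relu(W @ x + b)   # affine transform followed by a nonlinearity
    W, b = last
    return W @ x + b          # final layer left linear (e.g. logits)

rng = np.random.default_rng(0)
# Hypothetical 4 -> 8 -> 8 -> 2 fully connected network.
layers = [
    (rng.standard_normal((8, 4)), np.zeros(8)),
    (rng.standard_normal((8, 8)), np.zeros(8)),
    (rng.standard_normal((2, 8)), np.zeros(2)),
]
print(dnn_inference(rng.standard_normal(4), layers))
```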
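As an illustration of the quantization technique mentioned above, below is a minimal sketch of symmetric, per-tensor post-training 8-bit quantization of a single weight matrix. The scale rule and function names are simplifying assumptions, not any specific library's API.

```python
import numpy as np

def quantize_int8(W):
    """Map float weights to int8 with a single per-tensor scale (assumed symmetric)."""
    scale = np.abs(W).max() / 127.0                 # max magnitude maps to the int8 range
    q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

W = np.random.default_rng(0).standard_normal((8, 4)).astype(np.float32)
q, scale = quantize_int8(W)
print("max abs reconstruction error:", np.abs(W - dequantize(q, scale)).max())
```

Storing int8 weights roughly quarters the memory footprint relative to float32, at the cost of the small reconstruction error printed above.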
- Example(s):
- ...
- Counter-Example(s):
- DNN Training, where the network learns from a dataset by adjusting its weights through backpropagation based on a loss function.
- Traditional machine learning inference methods, such as Decision Tree Inference and Logistic Regression Inference.
- See: Model Optimization, Hardware Acceleration, Real-Time Inference, AI Application Domains, Groq.