Optimize TensorFlow Models For Deployment with TensorRT

Course Highlights

This is a hands-on, guided project on optimizing TensorFlow models for inference with NVIDIA's TensorRT. By the end of this 1.5-hour project, you will be able to optimize TensorFlow models using the TensorFlow integration of NVIDIA's TensorRT (TF-TRT), apply TF-TRT to several deep learning models at FP32, FP16, and INT8 precision, and observe how tuning TF-TRT parameters affects performance and inference throughput.

Prerequisites: To complete this project successfully, you should be competent in Python programming, understand deep learning and what inference is, and have experience building deep learning models in TensorFlow and its Keras API.

Note: This course works best for learners based in the North America region. We're currently working on providing the same experience in other regions.
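The core TF-TRT workflow the project walks through is compact enough to sketch. The example below is a minimal, illustrative sketch rather than the project's own notebook code: it assumes a TensorFlow 2.x environment with the TensorRT libraries installed and a SavedModel exported to a hypothetical directory named resnet50_saved_model, and it uses TensorFlow's public tf.experimental.tensorrt API.

```python
import tensorflow as tf

# Pick the target precision: "FP32", "FP16", or "INT8".
# (INT8 additionally requires representative calibration data
#  passed to convert() via calibration_input_fn.)
params = tf.experimental.tensorrt.ConversionParams(precision_mode="FP16")

# Point the converter at an existing SavedModel (hypothetical path).
converter = tf.experimental.tensorrt.Converter(
    input_saved_model_dir="resnet50_saved_model",
    conversion_params=params,
)

# Replace supported subgraphs with TensorRT-optimized engines,
# then write the result out as a new SavedModel.
converter.convert()
converter.save("resnet50_saved_model_TFTRT_FP16")
```

The converted model is loaded for inference like any other SavedModel (tf.saved_model.load), so throughput at the different precisions can be compared by timing the same batch of inputs against the original and optimized models.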

About the Course Provider

Coursera provides access to more than 3,000 courses across a wide variety of subjects, in partnership with various universities and organizations.

Published by

  • Self-paced learning
  • Duration: 3 hours
  • Domain: Information Technology and Computer Science
  • Monthly subscription option: not available
  • Fee: Free
  • Language: English