Price: $147
Regular price $210, discounted 30%
4-hour immersive session
Hands-on training with Q&A
Recording available on-demand
Certificate of Completion
Subscribe and get an additional 10% to 35% off ALL live training sessions
Course Outline
What’s the plan?
Matt Harrison has been working with Python and data since 2000. He has a computer science degree from Stanford. He has worked at many amazing companies, created cool products, written a couple of books, and taught Python and data science to thousands of people. Currently, he works as a corporate trainer, author, and consultant through his company Metasnake, which provides consulting and teaches corporations how to be effective with Python and data science.

LIVE TRAINING: Introduction to Python for Programming
October 12th @12 PM EST
LIVE TRAINING CERTIFICATION: Deep Learning
With Dr. Jon Krohn
Gradient Boosting Series
4-Course Program
Get Certified in Gradient Boosting in ONLY 4 courses
Course 1: Machine Learning in Business and Industry
When hearing the term “machine learning”, many people think immediately of complex, artificially intelligent prediction systems. While these certainly exist, machine learning can also be used for data analysis – […]
Course 2: Fundamentals of Gradient Boosting
Abstract: In this course, we start from a single decision tree and progress to random forests and gradient boosted tree models. We cover the various hyperparameters involved in gradient boosted […] (see the sketch after this course list)
Course 3: Building Gradient Boosting Models
Abstract: This course focuses on how to approach model building when using modern machine learning models in general, and specifically gradient boosting. Guidelines that made sense when building a linear […]
Course 4: Understanding a Model
Abstract: You’ve built a model! But what is it actually doing? Do its judgments make sense? How do I explain it to someone else? Why did it make this particular […]
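To give a flavor of the progression described in Course 2 (from a single decision tree, to a random forest, to gradient boosted trees), here is a minimal illustrative sketch using scikit-learn. The dataset, hyperparameter values, and model choices are assumptions for demonstration only, not course material.

# Minimal sketch of the Course 2 progression: one tree -> random forest -> gradient boosting.
# Illustrative only; the dataset and hyperparameter values are arbitrary choices, not course code.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "single decision tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, max_depth=4, random_state=0),
    "gradient boosting": GradientBoostingClassifier(n_estimators=200, max_depth=3,
                                                    learning_rate=0.1, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)                       # each model is trained on the same split
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name:>22}: test accuracy = {acc:.3f}")

On held-out data the ensembles usually beat the single tree, which is the motivation the course builds on.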
How It Works
Enroll in the full Gradient Boosting Series program or choose one of the 4 courses (free with the Ai+ Training Plans).
Each course includes exercises to improve learning outcomes.
Coding demos allow you to learn hands-on skills.
Learn at your own pace. All sessions are available on-demand.
Become certified in Gradient Boosting.
Meet Your Instructor
Brian Lucena
Brian Lucena is Principal at Numeristical and the creator of StructureBoost, ML-Insights, and SplineCalib. His mission is to enhance the understanding and application of modern machine learning and statistical techniques. He does this through academic research, open-source software development, and educational content such as live stream classes and interactive Jupyter notebooks. Additionally, he consults for organizations of all sizes from small startups to large public enterprises. In previous roles, he has served as SVP of Analytics at PCCI, Principal Data Scientist at Clover Health, and Chief Mathematician at Guardian Analytics. He has taught at numerous institutions including UC-Berkeley, Brown, USF, and the Metis Data Science Bootcamp.
What you will learn
By the end of this 4-part live, hands-on, online course, you’ll understand in detail how Gradient Boosting models are fit as an ensemble of decision trees and how to apply that understanding to the feature engineering process; you’ll know the various parameters of Gradient Boosting, their relative importance, and how to choose them appropriately; and you’ll gain familiarity with the major Gradient Boosting packages and the capabilities, strengths, and weaknesses of each. You will also learn how to interpret, understand, and evaluate a model, both qualitatively and quantitatively.
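As a rough preview of the parameters discussed above, the sketch below names the three knobs that typically matter most in gradient boosting (number of boosting rounds, learning rate, and tree depth) using scikit-learn's histogram-based implementation. The settings shown are illustrative assumptions, not course recommendations, and the closing comment gives only an approximate mapping onto packages such as XGBoost and LightGBM.

# Sketch only: the three hyperparameters that usually matter most in gradient boosting.
# Requires scikit-learn >= 1.0; values below are illustrative, not course recommendations.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=10_000, n_features=25, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

model = HistGradientBoostingClassifier(
    max_iter=300,        # number of boosting rounds (trees added to the ensemble)
    learning_rate=0.05,  # shrinkage applied to each tree's contribution
    max_depth=3,         # depth of each individual tree (controls interaction order)
    random_state=1,
)
model.fit(X_train, y_train)
print("test ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
# Approximate mapping to other packages: XGBoost and LightGBM expose similar knobs
# as n_estimators, learning_rate, and max_depth.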
Part 1: Live Training: Feb 2nd, 2022
Course 1: June 28th
Machine Learning in Business and Industry
REGISTER NOW
- Why build a Model? Prediction vs Analysis
- Classification vs Regression
- Going further: Calibration and Uncertainty Quantification
- Metrics for Regression and Classification (see the sketch after this list)
- Beyond Metrics: “Real-world” Model Evaluation
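To make the metrics topics above concrete, here is a small illustrative sketch (not course code) computing a few standard classification and regression metrics with scikit-learn on made-up predictions.

# Illustrative only: common metrics for classification vs regression (Course 1 topics).
import numpy as np
from sklearn.metrics import (accuracy_score, roc_auc_score, log_loss,
                             mean_absolute_error, mean_squared_error, r2_score)

# Classification: true labels, predicted probabilities, and hard predictions.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_prob = np.array([0.2, 0.8, 0.6, 0.3, 0.9, 0.4, 0.7, 0.55])
y_pred = (y_prob >= 0.5).astype(int)
print("accuracy:", accuracy_score(y_true, y_pred))
print("ROC AUC :", roc_auc_score(y_true, y_prob))   # uses probabilities, not hard labels
print("log loss:", log_loss(y_true, y_prob))        # penalizes confident wrong answers

# Regression: continuous targets and predictions.
t_true = np.array([3.1, 0.5, 2.2, 7.8])
t_pred = np.array([2.9, 0.7, 2.0, 8.4])
print("MAE :", mean_absolute_error(t_true, t_pred))
print("RMSE:", np.sqrt(mean_squared_error(t_true, t_pred)))
print("R^2 :", r2_score(t_true, t_pred))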
How Deep Learning Works
Module 1: The Unreasonable Effectiveness of Deep Learning
Module 2: Essential Neural Network Theory
- A Brief History of the Rise of Deep Learning
- Deep Learning vs Other Machine Learning Approaches
- Dense Feedforward Networks
- Convolutional Networks for Machine Vision
- Recurrent Networks for Natural Language Processing and Time-Series Predictions
- Generative Adversarial Networks for Artistic Creativity
- Deep Reinforcement Learning for Sequential Decision-Making
- An Artificial Neural Network in TensorFlow 2 (see the sketch after this list)
- The Essential Math of Artificial Neurons
- The Essential Math of Neural Networks
- Activation Functions
- Cost Functions, including Cross-Entropy
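As a hedged preview of the "Artificial Neural Network in TensorFlow 2" topic above, this minimal sketch builds and trains a tiny dense network with tf.keras; the data, layer sizes, and training settings are arbitrary illustrations rather than the certification notebook.

# Minimal sketch: a small dense (fully connected) network in TensorFlow 2 / Keras.
# Input size, layer widths, and training settings are illustrative, not course code.
import numpy as np
import tensorflow as tf

# Toy data: 1,000 samples with 20 features and 3 classes.
X = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 3, size=1000)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),    # hidden layer of artificial neurons
    tf.keras.layers.Dense(3, activation="softmax"),  # output layer: one probability per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # cross-entropy cost function
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
loss, acc = model.evaluate(X, y, verbose=0)
print("training-set accuracy:", acc)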
Part 2: Live Training: Feb 16th, 2022
Course 2: July 12th
Fundamentals of Gradient Boosting
REGISTER NOW
- Decision Trees
- Random Forests
- Gradient Boosted Trees
- Details of Hyperparameters
Building and Training a Deep Learning Network
Module 1: Essential Deep Learning Theory
Module 2: Deep Learning with Keras, TensorFlow’s High-Level API
- Stochastic Gradient Descent
- Backpropagation
- Mini-Batches
- Learning Rate
- Fancy Optimizers (e.g., Adam, Nadam)
- Glorot/He Weight Initialization
- Dense Layers
- Softmax Layers
- Dropout
- Data Augmentation
- TensorFlow Playground: Visualizing a Deep Net in Action
- Revisiting our Shallow Net
- A Deep Neural Net (see the sketch after this list)
- Tuning Model Hyperparameters
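The sketch below is a hedged illustration of the training knobs listed above: mini-batch size, an explicit learning rate for the Adam optimizer, dropout, and Glorot-initialized dense layers, again with made-up data and arbitrary sizes rather than course code.

# Sketch of the training knobs listed above: mini-batches, learning rate, Adam, dropout.
# Architecture and values are illustrative assumptions, not the certification notebook.
import numpy as np
import tensorflow as tf

X = np.random.rand(2000, 784).astype("float32")   # e.g. flattened 28x28 images (stand-in data)
y = np.random.randint(0, 10, size=2000)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),  # dense layers use Glorot initialization by default
    tf.keras.layers.Dropout(0.3),                   # randomly drops 30% of activations during training
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # "fancy" optimizer, explicit learning rate
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# Mini-batch training: 32 examples per weight update, with a held-out validation split.
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2, verbose=0)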
Part 3: Live Training: March 2nd, 2022
Course 3: July 26th
Building Gradient Boosting Models
REGISTER NOW
- Details of Hyperparameters
- The train/test paradigm
- Iterating and Improving your Model
- Early Stopping / Hyperparameter Optimization (see the sketch after this list)
- Failure Analysis and Feature Engineering
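As a rough illustration of the early stopping and hyperparameter optimization topics above, the sketch below combines scikit-learn's built-in early stopping for histogram gradient boosting with a small randomized hyperparameter search; the parameter grid and values are assumptions for demonstration, not course guidance.

# Illustrative sketch of early stopping and a simple hyperparameter search (Course 3 themes).
# Requires scikit-learn >= 1.0; the grid below is a hypothetical example, not a recommendation.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split, RandomizedSearchCV

X, y = make_classification(n_samples=8000, n_features=30, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=2)

# Early stopping: keep adding trees until a held-out validation score stops improving.
model = HistGradientBoostingClassifier(
    max_iter=1000,            # upper bound on boosting rounds
    early_stopping=True,      # monitor a validation split carved from the training data
    validation_fraction=0.15,
    n_iter_no_change=20,      # stop after 20 rounds without improvement
    random_state=2,
)

# Randomized search over a small grid of the key hyperparameters.
search = RandomizedSearchCV(
    model,
    param_distributions={"learning_rate": [0.01, 0.05, 0.1, 0.2],
                         "max_depth": [2, 3, 4, 6],
                         "max_leaf_nodes": [15, 31, 63]},
    n_iter=10, cv=3, scoring="roc_auc", random_state=2,
)
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("held-out score:", search.score(X_test, y_test))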
Machine Vision and Creativity
Module 1: Introducing Deep Learning for Machine Vision
Module 2: Convolutional Neural Networks in Practice with Keras
Module 3: Generative Adversarial Networks
- Machine Vision Applications
- Review of Relevant Fundamental Deep Learning Theory
- Essential Theory of Convolutional Neural Networks
- Classic Model Architectures: LeNet-5, AlexNet & VGGNet
- Residual Networks (ResNet)
- U-Net
- Image Classification (see the sketch after this list)
- Object Detection
- Semantic Image Segmentation
- Transfer Learning
- How GANs were Born
- Applications of GANs
- Essential GAN Theory
- A Cartoon-Drawing GAN in Keras
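For a taste of convolutional networks in Keras as listed above, here is a minimal image classification sketch; the input shape, layer sizes, and toy data are illustrative assumptions, not the certification notebook.

# Minimal sketch of a convolutional network for image classification in Keras.
# Shapes and layer sizes are illustrative assumptions, not course code.
import numpy as np
import tensorflow as tf

X = np.random.rand(500, 28, 28, 1).astype("float32")   # toy stand-in for 28x28 grayscale images
y = np.random.randint(0, 10, size=500)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu"),  # learnable convolutional filters
    tf.keras.layers.MaxPooling2D(pool_size=2),                     # downsample the feature maps
    tf.keras.layers.Conv2D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),               # one probability per class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)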
Part 4: Live Training: March 16th, 2022
Course 4: August 3rd
Understanding a Model
REGISTER NOW
- Why should I trust a model?
- How does my model generally work?
- Why is my model making this particular prediction?
- Can I trust the probability given by my model? (see the sketch after this list)
- Inferring causality from models
- Case analysis vs. population level metrics
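As a hedged preview of two of the questions above, the sketch below uses scikit-learn's permutation importance to ask which features a model relies on and a calibration curve to ask whether its probabilities can be trusted; the dataset and model are arbitrary illustrations, not course code.

# Illustrative sketch of two Course 4 questions: "how does my model generally work?"
# (permutation importance) and "can I trust its probabilities?" (calibration curve).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance
from sklearn.calibration import calibration_curve

X, y = make_classification(n_samples=6000, n_features=12, n_informative=5, random_state=3)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=3)

model = GradientBoostingClassifier(random_state=3).fit(X_train, y_train)

# Which features does the model lean on? Shuffle each one and see how much the score drops.
imp = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=3)
for i in imp.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: importance {imp.importances_mean[i]:.3f}")

# Are the predicted probabilities honest? Compare predicted vs observed event rates by bin.
prob = model.predict_proba(X_val)[:, 1]
frac_positive, mean_predicted = calibration_curve(y_val, prob, n_bins=10)
for p, f in zip(mean_predicted, frac_positive):
    print(f"predicted ~{p:.2f} -> observed {f:.2f}")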
Natural Language Processing
Module 1: The Power and Elegance of Deep Learning for NLP
Module 2: Modeling Natural Language Data
Module 3: Recurrent and Advanced Neural Networks
- Introduction to Deep Learning for Natural Language Processing
- Easy, Intermediate, and Complex NLP Applications
- Review of Relevant Fundamental Deep Learning Theory
- Word Vectors: Representing Language as Embeddings
- Word Vector Arithmetic
- An Interactive Visualization of Vector-Space Embeddings
- Vector-Based Representations vs One-Hot Encodings
- Best Practices for Preprocessing Natural Language Data
- Using word2vec to Create Word Vectors
- Document Classification with a Dense Neural Network
- Document Classification with a Convolutional Neural Network
- Recurrent Neural Networks (RNNs)
- Long Short-Term Memory Units (LSTMs)
- Gated Recurrent Units (GRUs)
- Bi-Directional LSTMs (see the sketch after this list)
- Stacked LSTMs
- Parallel Network Architectures
- Transformers: BERT, ELMo & Friends
- Financial Time Series Applications
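To sketch how the pieces above fit together, here is a minimal, hypothetical document classifier in tf.keras that feeds learned word embeddings into a bidirectional LSTM; the vocabulary size, sequence length, and stand-in data are assumptions for illustration only.

# Sketch of a document classifier built from the pieces listed above:
# learned word embeddings feeding a bidirectional LSTM. Sizes are illustrative assumptions.
import numpy as np
import tensorflow as tf

vocab_size, seq_len = 5000, 100                               # hypothetical vocabulary and padded length
X = np.random.randint(0, vocab_size, size=(1000, seq_len))    # token-id sequences (stand-in data)
y = np.random.randint(0, 2, size=1000)                        # binary labels, e.g. sentiment

model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len,), dtype="int32"),
    tf.keras.layers.Embedding(vocab_size, 64),                # word-vector lookup, learned end to end
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # reads the sequence in both directions
    tf.keras.layers.Dense(1, activation="sigmoid"),           # probability of the positive class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)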
Part 5: Live Training: March 30th, 2022
Deep Reinforcement Learning and A.I.
REGISTER NOW
Module 1: The Foundations of Artificial Intelligence
Module 2: Deep Q-Learning Networks
Module 3: Advanced Agents
- The Contemporary State of A.I.
- Artificial General Intelligence
- Applications of Deep Reinforcement Learning
- The Cartpole Game
- Essential Deep Reinforcement Learning Theory
- Defining a DQN Agent
- Interacting with an OpenAI Gym Environment (see the sketch after this list)
- SLM-Lab for Agent Experimentation and Optimization
- Policy Gradients
- REINFORCE
- The Actor-Critic Algorithm
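As a rough illustration of interacting with an OpenAI Gym environment as listed above, the sketch below runs one CartPole episode with a random policy; it assumes the classic pre-0.26 Gym API (noted in the comments) and only marks where a trained DQN agent would choose actions.

# Sketch of interacting with the CartPole environment in OpenAI Gym.
# Assumes the classic Gym API (pre-0.26): env.reset() returns an observation and
# env.step() returns (obs, reward, done, info). Newer gym/gymnasium versions return
# (obs, info) from reset() and split `done` into `terminated`/`truncated`.
import gym

env = gym.make("CartPole-v1")
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()   # random policy; a DQN agent would instead pick the
                                         # action with the highest predicted Q-value here
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return with a random policy:", total_reward)
env.close()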
PyTorch and Beyond
Module 1: Deep Learning with PyTorch
Module 2: Final Topics
- Overview of the Leading Deep Learning Libraries
- Detailed Comparison of TensorFlow 2 and PyTorch
- A Shallow Neural Network in PyTorch (see the sketch after this list)
- Deep Neural Networks in PyTorch
- Software 2.0
- Approaching Artificial General Intelligence
- Creating Your Own Deep Learning Project
- What to Study Next, Depending on Your Interests
- Jeanne Calment and Your Role in the A.I. Revolution
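For a hedged preview of the "Shallow Neural Network in PyTorch" topic above, this minimal sketch defines a one-hidden-layer network and runs a few full-batch training steps; sizes, data, and settings are illustrative assumptions, not the course notebook.

# Minimal sketch of a shallow neural network and a short training loop in PyTorch.
# Sizes, data, and settings are illustrative assumptions, not course code.
import torch
import torch.nn as nn

# Toy data: 1,000 samples, 20 features, 3 classes.
X = torch.randn(1000, 20)
y = torch.randint(0, 3, (1000,))

model = nn.Sequential(
    nn.Linear(20, 64),   # single hidden layer -> a "shallow" network
    nn.ReLU(),
    nn.Linear(64, 3),    # raw class scores (logits)
)
loss_fn = nn.CrossEntropyLoss()                             # softmax + cross-entropy in one step
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # forward pass on the full batch (mini-batches omitted for brevity)
    loss.backward()               # backpropagation
    optimizer.step()              # weight update
    print(f"epoch {epoch}: loss {loss.item():.4f}")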
4-Course Gradient Boosting Series
Key Details
PRICE
DATE: Starting June 28th
DURATION: 3 hours per class
LEVEL: BEGINNER
Prerequisites
This course is geared toward data scientists of all levels who wish to gain a deep understanding of Gradient Boosting and how to apply it to real-world situations. The ideal participant will have some experience building models, know the Python data science toolkit (numpy, pandas, scikit-learn, matplotlib), and have experience fitting models on training sets, making predictions on test sets, and evaluating the quality of a model with metrics.
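For reference, the prerequisite workflow described above looks roughly like the following sketch; the particular model and dataset are arbitrary illustrations of the expected baseline, not course content.

# A rough picture of the prerequisite workflow: split the data, fit a model on the
# training set, predict on the test set, evaluate with a metric.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)   # higher max_iter so the solver converges on raw features
model.fit(X_train, y_train)                 # fit on the training set
y_pred = model.predict(X_test)              # predict on the held-out test set
print("test accuracy:", accuracy_score(y_test, y_pred))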