AI+ & ODSC

TRAINING CERTIFICATION

Get Certified in AI's Most Advanced Topic

ON DEMAND

NOW AVAILABLE
FREE WITH ANY AI+ TRAINING PLAN
BEGIN
  • WORKSHOP 1 Introduction to Large Language Models

    You’ll develop a working understanding of how deep learning works over two modules. Module 1 offers key insights into 'The Unreasonable Effectiveness of Deep Learning'; Module 2 covers essential neural network theory.
  • WORKSHOP 2 Prompt Engineering Fundamentals

    This workshop on Prompt Engineering explores the pivotal role of prompts in guiding Large Language Models (LLMs) like ChatGPT to generate desired responses. It emphasizes how prompts […]
  • WORKSHOP 3 Prompt Engineering with OpenAI

    This workshop on prompt engineering with OpenAI discusses best practices for utilizing OpenAI models. We will review how to separate instructions and context using special […]
  • WORKSHOP 4 Build a Question & Answering Bot

    The workshop notebook delves into building a Question and Answering Bot based on a fixed knowledge base, covering the integration of concepts discussed in […]
  • WORKSHOP 5 Fine Tuning Embedding Models

    This workshop explores the importance of fine-tuning Large Language Models (LLMs) and embedding models. It highlights how embedding models are used to map natural language to […]
  • WORKSHOP 6 Fine Tuning an Existing LLM

    The workshop explores the process of fine-tuning Large Language Models (LLMs) for Natural Language Processing (NLP) tasks. It highlights the motivations for fine-tuning, such as […]
  • WORKSHOP 7 LangChain Agents

    The “LangChain Agents” workshop delves into the “Agents” component of the LangChain library, offering a deeper understanding of how LangChain integrates Large Language Models (LLMs) with external systems […]
  • WORKSHOP 8 Parameter Efficient Fine tuning

    This workshop focuses on parameter-efficient fine-tuning (PEFT) techniques in the field of machine learning, specifically within the context of large neural language […]
  • WORKSHOP 9 Retrieval-Augmented Generation (RAG)

    Retrieval-Augmented Generation (RAG) is a powerful natural language processing (NLP) architecture introduced in this workshop notebook. RAG combines retrieval and generation models, enhancing language understanding and generation […]
Certificate Award

How it Works

  • Enroll in the full Generative AI and LLM Certificate course (free with any Ai+ Training Plan) and get access to all the course workshops.

  • Each workshop contains one or more code notebooks for hands-on experience.

  • Each workshop consists of one or more tutorials to explain the core concepts and walk you through the code.

  • Workshop exercises and checkpoints are included to test your knowledge.

  • Learn at your own pace. All sessions are available on-demand.

  • Complete all 9 workshops and receive an ODSC Certificate in Generative AI and LLMs.

FREE WITH ANY ODSC PASS

REGISTER NOW

Course Contents

Course 1: Introduction to Large Language Models

This hands-on course serves as a comprehensive introduction to Large Language Models (LLMs), covering a spectrum of topics from their differentiation from other language models to their underlying architecture and practical applications. It delves into the technical aspects, such as the transformer architecture and the attention mechanism, which are the cornerstones of modern language models.

Using the code notebooks included in this course, participants can code along with the instructor for hands-on practice with LLMs.

What’s Covered

  • Introduction
  • Why Are LLMs So Powerful?
  • The Transformer Architecture
  • The Application of LLMs 
  • Flow Chaining
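
As a taste of the attention mechanism this course covers, here is a minimal pure-Python sketch of scaled dot-product attention over toy vectors (an illustration only, not the course notebooks):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

    Q, K, V are lists of vectors (lists of floats); d is the key dimension.
    """
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# One query attending over two keys; the query matches the first key more
# closely, so the output is pulled toward the first value vector.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))
```

In a real transformer, Q, K, and V are learned projections of the token embeddings, and many such attention heads run in parallel.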

Course 2: Introduction to Prompt Engineering

This workshop on Prompt Engineering explores the pivotal role of prompts in guiding Large Language Models (LLMs) like ChatGPT to generate desired responses. It emphasizes how prompts provide context, control output style and tone, aid in precise information retrieval, offer task-specific guidance, and ensure ethical AI usage.

What’s Covered

  • Introduction to Prompt Engineering
  • Prompt Tuning as a Mechanism for Fine Tuning
  • Guardrails for Prompt Responses
  • Temperature as a Means for Model Control
  • Memorization
  • Tools for Prompt Engineering
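
To illustrate “Temperature as a Means for Model Control”, here is a small sketch of temperature-scaled softmax over toy next-token logits (the numbers are illustrative only):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to a probability distribution, scaled by temperature.

    Lower temperature sharpens the distribution (more deterministic output);
    higher temperature flattens it (more varied output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]          # toy next-token scores
cold = softmax_with_temperature(logits, 0.2)
hot = softmax_with_temperature(logits, 2.0)
print(cold)  # probability mass concentrates on the top token
print(hot)   # probabilities move toward uniform
```

This is why setting a low temperature in an API call makes an LLM's answers more repeatable, while a high temperature makes them more diverse.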


Course 3: Prompting with OpenAI and Prompting Safety Guardrails

This workshop on prompt engineering with OpenAI discusses best practices for utilizing OpenAI models. The workshop also includes code for installing the langchain library and demonstrates how to create prompts effectively, emphasizing the importance of clarity, specificity, and precision. Additionally, the workshop shows how to craft prompts for specific tasks, such as extracting entities from text. Lastly, the workshop addresses the importance of using prompts as safety guardrails, introducing prompts to mitigate hallucination and jailbreaking risks.

What’s Covered

  • Best Practices for Prompting OpenAI
  • Prompting Safety Guardrails
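
The two topics above — separating instructions from untrusted context and adding a safety guardrail against hallucination — can be sketched with plain Python string formatting (the delimiter style and guardrail wording here are illustrative, not an OpenAI API):

```python
def build_prompt(instructions, context, question):
    """Assemble a prompt that separates trusted instructions from untrusted
    context using ### delimiters, and adds a guardrail clause that tells the
    model to admit when the answer is not in the context."""
    return (
        f"{instructions}\n"
        "If the answer is not contained in the context, reply exactly: "
        '"I don\'t know."\n'
        f"### Context ###\n{context}\n"
        f"### Question ###\n{question}"
    )

prompt = build_prompt(
    instructions="Answer using only the context below.",
    context="ODSC offers a nine-workshop certificate in Generative AI and LLMs.",
    question="How many workshops are in the certificate?",
)
print(prompt)
```

Delimiters make it harder for text smuggled into the context to be interpreted as instructions, which is one of the jailbreak mitigations the workshop discusses.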

Course 4: Building a Q&A Bot with LLMs, Vector Search, and LangChain

The workshop notebook delves into building a Question and Answering Bot based on a fixed knowledge base, covering the integration of concepts discussed in earlier notebooks about LLMs (Large Language Models) and prompting. The notebook explains the steps involved in vector search, including vector representation, indexing, querying, similarity measurement, and retrieval, detailing various technologies used for vector search such as vector libraries, vector databases, and vector plugins. Following this, the focus shifts to text generation, where LangChain Chains are introduced. Chains, as described, allow for more complex applications by chaining several steps and models together into pipelines. A RetrievalQA chain is used to build a Q&A Bot application which utilizes an OpenAI chat model for text generation.

What’s Covered

  • Building a Q&A Bot

  • Vector Search Technologies

  • LangChain Chains
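
The vector-search step above can be sketched with cosine similarity over toy embeddings; a real pipeline would use an embedding model and a vector database rather than the hand-written 3-dimensional vectors assumed here:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, index, k=2):
    """Return the k documents whose embeddings are most similar to the query."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Toy 3-dimensional "embeddings"; real embeddings have hundreds of dimensions.
index = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "store hours":    [0.0, 0.1, 0.9],
}
query = [0.8, 0.2, 0.1]  # pretend embedding of "How do I get my money back?"
print(top_k(query, index, k=1))  # → ['refund policy']
```

A RetrievalQA chain automates exactly this loop: embed the question, fetch the nearest documents, and pass them to the chat model as context.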

Course 5: Fine-Tuning LLMs and Embedding Models

This workshop explores the importance of fine-tuning Large Language Models (LLMs) and embedding models. It highlights how embedding models are used to map natural language to vectors, crucial for pipelines with multiple models to adapt to specific data nuances. An example demonstrates fine-tuning an embedding model for legal text. The notebook discusses existing solutions and hardware considerations, emphasizing GPU usage for large data.

What’s Covered

  • Fine Tuning Embedding Models

  • Fine Tuning a Large Language Model

Course 6: Fine Tuning an Existing LLM

 The workshop explores the process of fine-tuning Large Language Models (LLMs) for Natural Language Processing (NLP) tasks. It highlights the motivations for fine-tuning, such as task adaptation, transfer learning, and handling low-data scenarios, using a Yelp Review dataset. The notebook employs the HuggingFace Transformers library, including tokenization with AutoTokenizer, data subset selection, and model choice (BERT-based model). Hyperparameter tuning, evaluation strategy, and metrics are introduced. It also briefly mentions DeepSpeed for optimization and Parameter Efficient Fine-Tuning (PEFT) for resource-efficient fine-tuning, providing a comprehensive introduction to fine-tuning LLMs for NLP tasks.

What’s Covered

  • Fine Tuning a Large Language Model

Course 7: LangChain Agents

The “LangChain Agents” workshop delves into the “Agents” component of the LangChain library, offering a deeper understanding of how LangChain integrates Large Language Models (LLMs) with external systems and tools to execute actions. This workshop builds on the concept of “chains,” which can link multiple LLMs to tackle various tasks like classification, text generation, code generation, and more. “Agents” enable LLMs to interact with external systems and tools, making informed decisions based on available options. The workshop explores the different types of agents, such as “Zero-shot ReAct,” “Structured input ReAct,” “OpenAI Functions,” “Conversational,” “Self ask with search,” “ReAct document store,” and “Plan-and-execute agents.” It provides practical code examples, including initializing LLMs, defining tools, creating agents, and demonstrates how these agents can answer questions using external APIs, offering participants a comprehensive overview of LangChain’s agent capabilities.

What’s Covered

  • LangChain Agents

  • Chaining Multiple LLMs

  • Types of Agents
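
The agent loop described above — the model decides on an action, a tool executes it, and the observation becomes the answer — can be sketched with a stub in place of a real LLM. The tool name, parsing logic, and responses below are purely illustrative, not LangChain's API:

```python
def calculator(expression):
    """A 'tool' the agent can call, restricted to simple arithmetic."""
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsafe expression")
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def fake_llm(question):
    """Stand-in for a real LLM: decides whether a tool is needed.

    A real ReAct agent would parse the model's generated text for an
    action and an action input."""
    if any(ch.isdigit() for ch in question):
        return ("calculator", question.rstrip("?").split("is")[-1].strip())
    return ("final_answer", "I can only do arithmetic.")

def run_agent(question):
    """One step of the agent loop: decide, act, observe, answer."""
    action, arg = fake_llm(question)
    if action in TOOLS:
        observation = TOOLS[action](arg)
        return f"The answer is {observation}."
    return arg

print(run_agent("What is 6 * 7?"))  # → The answer is 42.
```

LangChain's agent types differ mainly in how the "decide" step is prompted and parsed; the decide-act-observe loop itself is common to all of them.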

Course 8: Parameter Efficient Fine-tuning

For the next workshop, our focus will be on parameter-efficient fine-tuning (PEFT) techniques in the field of machine learning, specifically within the context of large neural language models like GPT or BERT. PEFT is a powerful approach that allows us to adapt these pre-trained models to specific tasks while minimizing additional parameter overhead. Instead of fine-tuning the entire massive model, PEFT introduces compact, task-specific parameters known as “adapters” into the pre-trained model’s architecture. These adapters enable the model to adapt to new tasks without significantly increasing its size. PEFT strikes a balance between model size and adaptability, making it a crucial technique for real-world applications where computational and memory resources are limited, while still maintaining competitive performance. In this workshop, we will delve into the different PEFT methods, such as additive, selective, re-parameterization, adapter-based, and soft prompt-based approaches.

What’s Covered

  • Parameter-efficient fine-tuning (PEFT) techniques

  • Additive, selective, re-parameterization, adapter-based, and soft prompt-based PEFT
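
A back-of-the-envelope sketch of why adapters are parameter-efficient, assuming illustrative BERT-base-like dimensions (768 hidden size, 12 layers, roughly 110M base parameters, two bottleneck adapters per layer — all of these figures are assumptions for the sake of the arithmetic):

```python
def adapter_params(d_model, bottleneck, n_layers):
    """Parameters added by bottleneck adapters: each adapter projects
    d_model -> bottleneck -> d_model (weights plus biases), and we assume
    two adapters are inserted per transformer layer."""
    down = d_model * bottleneck + bottleneck   # down-projection
    up = bottleneck * d_model + d_model        # up-projection
    return n_layers * 2 * (down + up)

base_params = 110_000_000  # roughly the size of a BERT-base model
added = adapter_params(d_model=768, bottleneck=64, n_layers=12)
print(added, f"({added / base_params:.2%} of the base model)")
```

Only the adapter parameters are trained while the base model stays frozen, which is why PEFT fits on hardware that could never fine-tune the full model.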

Course 9: Retrieval-Augmented Generation (RAG)

Retrieval-Augmented Generation (RAG) is a powerful natural language processing (NLP) architecture introduced in this workshop notebook. RAG combines retrieval and generation models, enhancing language understanding and generation tasks. It consists of a retrieval component, which efficiently searches vast text databases for relevant information, and a generation component, often based on Transformer models, capable of producing coherent responses based on retrieved context. RAG’s versatility extends to various NLP applications, including question-answering and text summarization. Additionally, this notebook covers practical aspects such as indexing content, configuring RAG chains, and incorporating prompt engineering, offering a comprehensive introduction to harnessing RAG’s capabilities for NLP tasks.

What’s Covered

  • Retrieval-Augmented Generation (RAG)
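
The retrieve-then-generate flow can be sketched end-to-end in a few lines, with word overlap standing in for vector search; a real RAG chain would use embeddings for retrieval and an LLM for the generation step:

```python
def retrieve(question, corpus, k=1):
    """Rank passages by word overlap with the question (a crude stand-in
    for vector search) and return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_prompt(question, corpus):
    """Retrieve context, then stuff it into a generation prompt."""
    context = "\n".join(retrieve(question, corpus))
    return (f"Answer from the context only.\n"
            f"Context:\n{context}\n"
            f"Question: {question}")

corpus = [
    "RAG combines a retrieval component with a generation component.",
    "Transformers use self-attention to process sequences.",
]
print(rag_prompt("What does RAG combine?", corpus))
```

The generation model then answers from the retrieved context rather than from its parameters alone, which is what lets RAG stay grounded in a knowledge base.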

Pricing

Free with Our Premium Subscription

SUBSCRIBE to access all courses and the certificate

Level up your ODSC experience:

Make the most of your conference! These free courses equip you with the latest prompt engineering techniques, ensuring you ask the right questions and get the most out of every AI interaction.

Register now & Save 50%
Open Data Science

Ai+ | ODSC
One Broadway, 14th Floor
Cambridge, MA 02142
admin_aiplus@odsc.com
