Developing Generative AI Solutions on AWS

Course 1244

  • Duration: 2 days
  • Language: English
  • Level: Intermediate

This course is designed to introduce generative artificial intelligence (AI) to software developers interested in using large language models (LLMs) without fine-tuning.

The course provides an overview of generative AI and covers planning a generative AI project, getting started with Amazon Bedrock, the foundations of prompt engineering, and the architecture patterns for building generative AI applications with Amazon Bedrock and LangChain.

Gen AI Solutions on AWS Training Delivery Methods

  • In-Person

  • Online

  • Upskill your whole team by bringing Private Team Training to your facility.

Gen AI Solutions on AWS Training Information

Training Prerequisites

Gen AI Solutions on AWS Training Outline

  • Overview of ML  
  • Basics of generative AI  
  • Generative AI use cases  
  • Generative AI in practice  
  • Risks and benefits
  • Generative AI fundamentals
  • Generative AI in practice
  • Generative AI context
  • Steps in planning a generative AI project
  • Risks and mitigation
  • Introduction to Amazon Bedrock
  • Architecture and use cases
  • How to use Amazon Bedrock
  • Demonstration: Setting up Amazon Bedrock access and using playgrounds
  • Basics of foundation models
  • Fundamentals of Prompt Engineering
  • Basic prompt techniques
  • Advanced prompt techniques
  • Model-specific prompt techniques
  • Demonstration: Fine-tuning a basic text prompt
  • Addressing prompt misuses
  • Mitigating bias
  • Demonstration: Image bias mitigation
  • Overview of generative AI application components  
  • Foundation models and the FM interface  
  • Working with datasets and embeddings  
  • Demonstration: Word embeddings
  • Additional application components  
  • Retrieval Augmented Generation (RAG)
  • Model fine-tuning
  • Securing generative AI applications  
  • Generative AI application architecture
  • Introduction to Amazon Bedrock foundation models  
  • Using Amazon Bedrock FMs for inference  
  • Amazon Bedrock methods  
  • Data protection and auditability  
  • Demonstration: Invoking a Bedrock model for text generation using a zero-shot prompt (see the first sketch after this outline)
  • Optimizing LLM performance  
  • Using models with LangChain  
  • Constructing prompts 
  • Demonstration: Bedrock with LangChain using a prompt that includes context (see the second sketch after this outline)
  • Structuring documents with indexes  
  • Storing and retrieving data with memory  
  • Using chains to sequence components  
  • Managing external resources with LangChain agents
  • Introduction to architecture patterns  
  • Text summarization  
  • Demonstration: Text summarization of small files with Anthropic Claude
  • Demonstration: Abstractive text summarization with Amazon Titan using LangChain
  • Question answering  
  • Demonstration: Using Amazon Bedrock for question answering
  • Chatbot  
  • Demonstration: Conversational interface (chatbot) with an AI21 LLM
  • Code generation  
  • Demonstration: Using Amazon Bedrock models for code generation
  • LangChain and agents for Amazon Bedrock  
  • Demonstration: Integrating Amazon Bedrock models with LangChain agents
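
For reference, the zero-shot text generation demonstration above corresponds roughly to the following minimal sketch. It assumes boto3 credentials with Amazon Bedrock access in us-east-1 and that the Amazon Titan Text Express model has been enabled for the account; request and response field names vary by model family, so treat them as illustrative rather than definitive.

```python
# Minimal sketch (not course material): zero-shot text generation on Amazon Bedrock.
# Assumes AWS credentials with Bedrock access and that Amazon Titan Text Express
# is enabled in the chosen region.
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Zero-shot prompt: the task is stated directly, with no examples.
prompt = "Summarize the benefits of managed foundation model services in two sentences."

# Request body follows the Amazon Titan Text schema; other model families differ.
body = json.dumps({
    "inputText": prompt,
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5},
})

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",
    body=body,
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```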
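
The LangChain demonstration that sends a prompt containing context can be sketched as below. This assumes the langchain-aws and langchain-core packages and an enabled Anthropic Claude model; import paths and model identifiers depend on the installed LangChain version, so they are illustrative.

```python
# Minimal sketch (not course material): Bedrock via LangChain with a prompt
# that injects supplied context. Package names and model ID are assumptions.
from langchain_aws import ChatBedrock
from langchain_core.prompts import ChatPromptTemplate

llm = ChatBedrock(model_id="anthropic.claude-3-haiku-20240307-v1:0")

# The template keeps the supplied context separate from the user question so
# the model is steered to answer only from the provided material.
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}"
)

chain = prompt | llm  # LangChain Expression Language: the prompt feeds the model

answer = chain.invoke({
    "context": "Amazon Bedrock is a managed service that exposes foundation "
               "models from several providers behind a single API.",
    "question": "What does Amazon Bedrock provide?",
})
print(answer.content)
```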

Need Help Finding The Right Training Solution?

Our training advisors are here for you.

Gen AI Solutions on AWS Training FAQs

What is a large language model (LLM)?

A large language model (LLM) is a type of artificial intelligence model that has been trained on massive amounts of text data to understand and generate human-like language.

These models are typically built on deep learning architectures, most commonly the transformer, although earlier language models were built on recurrent neural networks (RNNs).

Not at this time.

What is prompt engineering?

Prompt engineering is the process of designing and crafting the prompts, or instructions, given to language models, particularly large language models (LLMs) such as the GPT (Generative Pre-trained Transformer) family.

The goal is to guide the model's behavior and output toward the desired task or objective.
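
As a concrete illustration (not taken from the course materials), the snippet below contrasts a zero-shot prompt with a few-shot prompt. The few-shot version adds worked examples so the model follows the intended output format; the wording is hypothetical.

```python
# Illustrative prompt-engineering patterns; the text is hypothetical.

# Zero-shot: the task is stated directly, with no examples.
zero_shot = (
    "Classify the sentiment of this review as Positive or Negative: "
    "'The battery dies within an hour.'"
)

# Few-shot: worked examples steer the model's format and behavior.
few_shot = """Classify the sentiment of each review as Positive or Negative.

Review: "Great screen and fast delivery." -> Positive
Review: "Stopped working after two days." -> Negative
Review: "The battery dies within an hour." ->"""
```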
