---
title: "Northeastern University: Foundations of Large Language Models"
slug: "northeastern-university-foundations-of-large-language-models"
author: "Jeremy Weaver"
date: "2025-01-27 16:36:43"
category: "Premium"
topics: "Pre-training Methods, Generative Model Architectures, Scaling and Context Length, Alignment Strategies, Prompting Techniques"
summary: "The content explores foundational methods and advanced techniques in large language model development, including pre-training, generative architectures like Transformers, scaling strategies, alignment through reinforcement learning and instruction fine-tuning, and various prompting methods."
banner: ""
thumbnail: ""
---
Northeastern University: Foundations of Large Language Models
Report Summary
The report details foundational concepts and advanced techniques in large language model (LLM) development. It covers pre-training methods, including masked language modeling and discriminative training, and explores generative model architectures such as the Transformer.
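To make the masked language modeling objective concrete, here is a minimal sketch of the data-corruption step: a fraction of tokens is replaced by a mask symbol, and the model is trained to recover the originals. The function name, the `[MASK]` symbol, and the 15% rate are illustrative conventions, not anything specified by the report.

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", mask_prob=0.15, seed=0):
    """Toy BERT-style masking: randomly replace tokens with a mask
    symbol and record the originals as prediction targets."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            corrupted.append(mask_token)
            targets[i] = tok  # the model must predict this original token
        else:
            corrupted.append(tok)
    return corrupted, targets
```

During pre-training, the model sees `corrupted` as input and is scored only on the positions listed in `targets`, which is what distinguishes this objective from left-to-right generative training.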
The text also examines scaling LLMs in both model size and context length, along with alignment strategies such as reinforcement learning from human feedback (RLHF) and instruction fine-tuning.
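A core ingredient of RLHF is the reward model, which is commonly trained on human preference pairs with a Bradley-Terry style loss: the loss is small when the preferred response receives the higher score. The following is a minimal sketch of that pairwise loss, not the report's specific formulation:

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Pairwise preference loss -log(sigmoid(r_chosen - r_rejected)),
    as used when training RLHF reward models on human comparisons."""
    diff = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))
```

When both responses score equally, the loss is log 2; it shrinks toward zero as the model assigns the human-preferred response an increasingly higher reward.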
Finally, it discusses prompting techniques, including chain-of-thought prompting and prompt optimization methods, to improve LLM performance and alignment with human preferences.
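Chain-of-thought prompting works by showing the model exemplars whose answers include intermediate reasoning before the final answer. A minimal sketch of assembling such a few-shot prompt, with an illustrative dictionary schema of my own choosing:

```python
def build_cot_prompt(examples, question):
    """Assemble a few-shot chain-of-thought prompt: each exemplar pairs
    a question with a reasoning trace and final answer, followed by the
    new question left open for the model to complete."""
    parts = []
    for ex in examples:
        parts.append(f"Q: {ex['question']}\nA: {ex['reasoning']} "
                     f"The answer is {ex['answer']}.")
    parts.append(f"Q: {question}\nA:")  # model continues from here
    return "\n\n".join(parts)
```

Because the exemplars demonstrate step-by-step reasoning, the model tends to produce a similar trace for the new question, which is the mechanism chain-of-thought prompting relies on.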