Trelis Research

@TrelisResearch - 171 videos

18.5K subscribers

Trelis LTD is a research company based in Dublin, founded by Ronan McGovern. 📚 Watch Large Language, Vision and Audio Model Tutorials 🛠 Explore Fine-tuning, In...

Recent videos

Quick Start Log OpenAI traces to Postgres 1:59

A Simple Postgres Logger for OpenAI Endpoints 0:52

A Simple Postgres Logger for OpenAI Endpoints - Open Source 18:39

Intro to TPU vs GPU 0:34

Unsloth or Transformers for fine tuning 2:58

TPU vs GPU 33:45

LLM Fine tuning Tips 1:27

What Open Weights Model should you Fine tune 1:56

How to Prepare Data for Fine tuning 1:46

Why fine tune 1:47

Fine tune Gemma 3, Qwen3, Llama 4, Phi 4 and Mistral Small with Unsloth and Transformers 1:21:12

Qwen3 Inference and MCP Agents 38:14

Qwen3 Inference and MCP Agents (Intro) 0:30

Approaches towards Solving ARC AGI 0:46

ARC Prize: A Guide to DSL, LLM-Guided & Test-time Training Approaches 2:02:00

Advanced Data Prep and Visualisation for Fine tuning LLMs 0:34

Advanced Data Prep and Visualisation Techniques for Fine-tuning LLMs 55:32

A Code Sandbox for Agents 0:42

Create a Python Sandbox for Agents to Run Code 15:49

Trelis Research Channel Updates - April 2025 0:25

April 2025 Channel Update - Repos, Grants, Collabs, and ARC AGI Team 10:06

Create Custom Benchmarks with YourBench and LightEval 0:33

Memory in LLMs is underappreciated 0:20

What LLM should I use for my application? 0:37

Build Custom LLM Benchmarks for your Application 46:46

Two ways data can leak in Cursor or Windsurf 0:42

Data Security Risks using Cursor and Windsurf 0:43

Is it safe to use Cursor or Windsurf? 18:08

Why building MCP Servers is Tricky! 0:43

How to Build and Publish an MCP Server 0:40

How to Build and Publish an MCP Server - A Detailed Guide 55:43

What should you use Qwen 2.5 Omni for? 0:40

Qwen 2.5 Omni 0:28

Qwen 2.5 Omni - The Most Multi-modal 30:05

How to use MCP with Cursor? 2:01

How to inspect an MCP server? 2:34

How does MCP work? How to use MCP? 55:39

How does MCP work? How to use MCP? 0:52

Price Comparison: OpenAI vs ElevenLabs vs DeepGram for TTS and STT 15:30

Voice Cloning with Sesame CSM-1B 0:22

Fine-tune Text to Speech Models in 2025: CSM-1B and Orpheus TTS 2:23:10

Diarization, Voice and Turn Detection 2:23:10

Gemma 3 21:43

Create and Fine-tune AI Avatar Videos 1:02:57

Document-Reading Agents with Read-Write Memory 42:55

Why use Keyword vs Vector Search? 30:20

Memory Management in LLMs and Agents 48:46

Train an LLM to Self-Correct with Verifiable Backtracking 22:58

SFT vs GRPO 55:15

How does GRPO work? 32:44

Reinforcement Learning for LLMs in 2025 1:18:19

The Best LLM? Google vs OpenAI, Anthropic and DeepSeek 11:27

Top Vision Models 2025: Qwen 2.5 VL, Moondream, & SmolVLM (Fine-Tuning & Benchmarks) 1:11:20

Run Distilled DeepSeek v3 Reasoning Models on Any Laptop with LMStudio 15:56

How to Boost RAG Accuracy with SmolAgents & BM25 54:16

OpenAI’s $500B ‘Stargate’ Project, GPU Export Bans, Musk’s Critique, and DeepSeek’s Reasoning Model 14:08

Channel Update - Playlists, Repos, Collabs, Grants, Memberships 17:07

Advanced Embedding Models and Techniques for RAG 49:45

Reasoning Models and Chinese Models 33:21

LiteLLM - One Unified API for all LLMs 17:37

Nvidia RTX 5090 vs 4090, Project Digits & GB NVLink 72 at CES 2025 12:19

LLM Evals - Part 2: Improving Performance 34:47

How Deepseek v3 made Compute and Export Controls Less Relevant 1:01:57

LLM Evals - Part 1: Evaluating Performance 34:23

I Tested Every GPU 46:34

Serve Multiple LoRA Adapters on a Single GPU 57:02

Why Build Enterprise RAG with Postgres? 1:17:56

Multi modal Audio + Text Fine tuning and Inference with Qwen 56:31

How to Build an Inference Service 1:00:38

How to Fine-tune Florence 2: The Best Small Vision Model 50:36

Output Predictions - Faster Inference with OpenAI or vLLM 24:23

Coding Assistant for Jupyter Lab 3:59

Predicting Events with Large Language Models 25:09

Fine tune and Serve Faster Whisper Turbo 34:44

OpenAI Fine-tuning vs Distillation - Free Colab Notebook 22:37

Synthetic Data Generation and Fine tuning (OpenAI GPT4o or Llama 3) 1:17:59

Test Time Compute, Part 2: Verifiers 1:00:00

Test Time Compute, Part 1: Sampling and Chain of Thought 54:59

Distillation of Transformer Models 1:20:38

Fine tuning Pixtral - Multi-modal Vision and Text Model 55:22

Powering Gigawatt Data Centers 39:40

Full Fine tuning with Fewer GPUs - Galore, Optimizer Tricks, Adafactor 1:03:42

Make Cursor Understand Folder Structure - Coding with LLMs 3:53

Automated Prompt Engineering with DSPy 45:52

Fine Tune Flux Diffusion Models with Your Photos 51:57

How to use LLMs for Fact Checking 40:40

CONTEXT CACHING for Faster and Cheaper Inference 35:26

Run Speech-to-Speech Models on Mac or GPU 37:03

LLM Security 101: Jailbreaks, Prompt Injection Attacks, and Building Guards 1:27:15

Create an AI Assistant or Endpoint for your Documents 2:56

RAG - but with Verified Citations! 33:30

How to pick a GPU and Inference Engine? 1:04:22

LLM Tool Use - GPT4o-mini, Groq & Llama.cpp 1:19:44

Text to Speech Fine-tuning Tutorial 1:15:44

Mastering Retrieval for LLMs - BM25, Fine-tuned Embeddings, and Re-Rankers 1:50:09

Improving LLM accuracy with Monte Carlo Tree Search 33:16

My TOP TEN TIPS for Fine-tuning 24:58

Preparing Fineweb - A Finely Cleaned Common Crawl Dataset 36:31

Data Preparation Tips and Tricks 59:37

Anonymizing Sensitive Data in LLM Prompts 35:45

Videos

A Simple Postgres Logger for OpenAI Endpoints - Open Source 18:39

611 views - 3 days ago

TPU vs GPU 33:45

2,017 views - 7 days ago

Fine tune Gemma 3, Qwen3, Llama 4, Phi 4 and Mistral Small with Unsloth and Transformers 1:21:12

3,479 views - 11 days ago

Qwen3 Inference and MCP Agents 38:14

1,983 views - 2 weeks ago

ARC Prize: A Guide to DSL, LLM-Guided & Test-time Training Approaches 2:02:00

1,400 views - 2 weeks ago

Advanced Data Prep and Visualisation Techniques for Fine-tuning LLMs 55:32

2,244 views - 3 weeks ago

Create a Python Sandbox for Agents to Run Code 15:49

1,476 views - 3 weeks ago

April 2025 Channel Update - Repos, Grants, Collabs, and ARC AGI Team 10:06

812 views - 1 month ago

Build Custom LLM Benchmarks for your Application 46:46

1,114 views - 1 month ago

Is it safe to use Cursor or Windsurf? 18:08

1,702 views - 1 month ago

How to Build and Publish an MCP Server - A Detailed Guide 55:43

4,089 views - 1 month ago

Qwen 2.5 Omni - The Most Multi-modal 30:05

4,035 views - 1 month ago