Join Connor Shorten for the Weaviate Podcast Series.
Available Episodes
5 of 65
Ofir Press on ALiBi and Self-Ask - Weaviate Podcast #65!
Hey everyone! Thank you so much for watching the Weaviate Podcast! I am SUPER excited to publish my conversation with Ofir Press! Ofir has done incredible work pioneering ALiBi attention and Self-Ask prompting, and I learned so much from speaking with him! As always, we are more than happy to answer any questions or discuss any ideas you have about the content in the podcast!
Huge congratulations on your Ph.D., Ofir!
ALiBi Attention: https://arxiv.org/abs/2108.12409
Self-Ask Prompting: https://arxiv.org/abs/2210.03350
Ofir Press on YouTube: https://www.youtube.com/@ofirpress
Chapters
0:00 Welcome Ofir Press
0:41 Large Context LLMs
12:38 Quadratic Complexity of Attention
19:12 ALiBi Attention, Visual Demo!
24:53 Recency Bias in LLMs
28:57 RAG in Long Context LLM Training
36:27 Self-Ask Prompting
46:07 Chain-of-Thought and Self-Ask
50:47 Gorilla LLMs
58:42 New Directions for New Training Data
August 31, 2023
1:07:11
Shishir Patil and Tianjun Zhang on Gorilla - Weaviate Podcast #64!
Hey everyone! Thank you so much for watching the 64th Weaviate Podcast with Shishir Patil and Tianjun Zhang, co-authors of Gorilla: Large Language Models Connected with Massive APIs! I learned so much about Gorilla from Shishir and Tianjun: the APIBench dataset, the continually evolving APIZoo, how the models are trained with Retrieval-Aware Training and Self-Instruct training data, how the authors think about fine-tuning LLaMA-7B models for tasks like this, and much more! I hope you enjoy the podcast! As always, I am more than happy to answer any questions or discuss any ideas you have about the content in the podcast!
Please check out the paper here! https://arxiv.org/abs/2305.15334
Chapters
0:00 Welcome Shishir and Tianjun
0:25 Gorilla LLM Story
1:50 API Examples
7:40 The APIZoo
10:55 Gorilla vs. OpenAI Funcs
12:50 Retrieval-Aware Training
19:55 Mixing APIs, Gorilla for Integration
25:12 LLaMA-7B Fine-Tuning vs. GPT-4
29:08 Weaviate Gorilla
33:52 Gorilla and Baby Gorillas
35:40 Gorilla vs. HuggingFace
38:32 Structured Output Parsing
41:14 Reflexion Prompting for Debugging
44:00 Directions for the Future
August 30, 2023
49:15
Nils Reimers on Cohere Search AI - Weaviate Podcast #63!
Hey everyone! Thank you so much for watching the 63rd Weaviate Podcast! I couldn't be more excited to welcome Nils Reimers back to the podcast! Similar to our debut episode together, we began by describing the latest collaboration between Weaviate and Cohere (episode 1, new multilingual embedding models; episode 2, rerankers!) and then continued into some of the key questions around search technology. In this one, we discussed the importance of temporal queries and metadata extraction, long document representation, and future directions for Retrieval-Augmented Generation! I hope you enjoy the podcast! As always, I am more than happy to answer any questions or discuss any ideas you have about the content in the podcast! Thank you so much for watching!
Learn more about Cohere Rerankers and how to use them in Weaviate here: https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/reranker-cohere
Chapters
0:00 Introduction
1:30 Cohere Rerankers
7:02 Dataset Curation at Cohere
10:30 New Rerankers and XGBoost
14:35 Temporal Queries
17:55 Metadata Extraction from Unstructured Text Chunks
21:52 Soft Filters
24:58 Chunking and Long Document Representation
38:00 Retrieval-Augmented Generation
45:40 Retrieval-Aware Training to solve Hallucinations
49:50 Learning to Search and End-to-End RAG
54:35 RETRO
59:25 Foundation Model for Search
August 17, 2023
1:05:10
Atai Barkai on PodcastGPT - Weaviate Podcast #62!
Hey everyone! Thank you so much for watching the 62nd Weaviate Podcast with Atai Barkai! We are getting meta with this one: a podcast about podcasts! Podcasts are one of the biggest opportunities for new technologies, starting with Whisper's ability to transcribe audio to text and advances in speaker diarization. The questions to be explored are: what Vector Database and LLM applications can we build with this data, and what is the future of podcasting with these new technologies? I had so much fun discussing all of these ideas with Atai! As always, we are more than happy to answer any questions or discuss any ideas you have about the content discussed in the podcast! Thank you so much for watching!
Chapters
0:00 Welcome Atai!
1:04 TawkitAI and PodcastGPT!
2:20 Chat with Podcast
PodcastGPT - https://www.podcastgpt.ai/
Tawkit AI - https://twitter.com/tawkitapp
Weaviate Podcast Search Demo!
https://github.com/weaviate/weaviate-podcast-search
August 9, 2023
55:40
Rohit Agarwal on Portkey - Weaviate Podcast #61!
Hey everyone! Thank you so much for watching the 61st episode of the Weaviate Podcast! I am beyond excited to publish this one! I first met Rohit at the Cal Hacks event hosted by UC Berkeley, where we had a debate about the impact of Semantic Caching! Rohit taught me a ton about the topic, and I think it's going to be one of the most impactful early applications of Generative Feedback Loops! Rohit is building Portkey, a SUPER interesting LLM middleware that does things like load balancing between LLM APIs. As discussed in the podcast, there are all sorts of opportunities in this space, whether it be routing to tool-specific LLMs, meeting different cost / accuracy requirements, or orchestrating multiple models in the HuggingGPT sense. It was amazing chatting with Rohit; this was the best dive into LLMOps I have personally been a part of! As always, we are more than happy to answer any questions or discuss any ideas you have about the content in the podcast!
Check out Portkey here! https://portkey.ai/blog
Chapters
0:00 Introduction
0:24 Portkey, Founding Vision
2:20 LLMOps vs. MLOps
4:00 Inference Hosting Options
7:05 3 Layers of LLM Use
8:35 LLM Load Balancers
12:45 Fine-Tuning LLMs
17:08 Retrieval-Aware Tuning
21:16 Portkey Cost Savings
23:08 HuggingGPT
26:28 Semantic Caching
32:40 Frequently Asked Questions
34:00 Embeddings vs. Generative Tasks
35:30 AI Moats, GPT Wrappers
39:56 Unlocks from Cheaper LLM Inference