
Oracle University Podcast

Oracle Corporation
Latest episode

163 episodes

  • Oracle University Podcast

    Vector AI Supporting Features: What's New in Oracle Exadata and GoldenGate

    22/04/2026 | 13min
    Hosts Lois Houston and Nikita Abraham are joined by Brent Dayley, Senior Principal APEX and Apps Dev Instructor, to explore the latest vector AI supporting features in Oracle Exadata and GoldenGate 23ai. The conversation begins with an overview of Exadata's capabilities and then shifts to how GoldenGate is powering distributed AI, real-time data streaming, and analytics with advanced microservices architecture. Brent highlights recent GoldenGate enhancements, including distributed vector support, robust monitoring, OCI IAM integration, and support for next-generation AI workloads via real-time vector hubs.
     
    Oracle AI Vector Search Deep Dive: https://mylearn.oracle.com/ou/course/oracle-ai-vector-search-deep-dive/144706/
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, and the OU Studio Team for helping us create this episode.
     
    Please note, this episode was recorded before Oracle AI Database 26ai replaced Oracle Database 23ai. However, all concepts and features discussed remain fully relevant to the latest release.
     
    -------------------------------------------------------
     
    Episode Transcript:
     
    00:00
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:26
    Lois: Hello and welcome to another episode of the Oracle University Podcast! I'm Lois Houston, Director of Communications and Adoption Programs with Customer Success Services, and with me is Nikita Abraham, Team Lead of Editorial Services with Oracle University. 
    Nikita: Hi everyone! Thanks for joining us! In our previous episode of this series, we took a deep dive into Oracle AI Vector Search and Retrieval Augmented Generation, or RAG, showing how unstructured data can be transformed into embeddings to power smarter, more context-aware AI with Oracle Database 23ai.
    Lois: That's right, Niki. We also explored how the OCI Generative AI service can be used with both Python and PL/SQL, and how AI Vector Search enables relevant information retrieval for large language model prompts.
    01:21
    Nikita: Today, we're focusing on the latest supporting features for Oracle AI Vector Search. Joining us once again is Brent Dayley, Senior Principal APEX and Apps Dev Instructor. Welcome back, Brent! To kick things off, could you outline what's new in Exadata with the 24ai release, particularly for AI storage?
    Brent: So Exadata has ushered in a new era of AI capabilities with the 24ai release. Key features of Exadata System Software 24ai include AI Smart Scan; Exadata RDMA Memory, known as XRMEM; Exadata Smart Flash Cache; and on-storage processing. 
    Also included are In-Memory Columnar Speed JSON Queries, Transparent Cross-Tier Scans, and caching enhancements, including Columnar Smart Scan at Memory Speed, Exadata Cache Observability, and Automatic KEEP Object Load into Exadata Flash Cache. 
    Now, Exadata system software 24ai is a significant release. It ushers in a new era of AI capabilities for Oracle Database users. 
    Now there have been some infrastructure improvements, including the ability to increase the number of virtual machines on X10M and Secure Boot for KVM Virtual Machines. 
    We have also improved and enhanced high availability and network resilience, including improved RoCE Network Resilience and enhanced RoCE Network Discovery. There have been some enhancements for monitoring and management, including AWR and SQL Monitor Enhancements and JSON API for Management Server. 
    Additionally, there's a security enhancement: SNMP Security. Now, Exadata System Software 24ai is supported on Exadata database machines and storage expansion racks from X6 and newer. 
    03:40
    Lois: Those are some fantastic advancements for Exadata users. Now, let's pivot to distributed AI. Brent, can you walk us through how GoldenGate enables distributed AI?
    Brent: Let's take a look at some common GoldenGate use cases as a refresher. The first use case is multi-active, high availability, and cross-region deployments, spanning on-premises and cloud environments. 
    Another use case includes data offloading and data hub creation in order to support multiple downstream applications. Real-time data stores for Downstream Marts and Analytics. Micro and mini services architecture and an audit history of transactions. 
    Other use cases include migrations and upgrades of databases, including OCI-hosted databases. Another use case would be creating analytic data feeds for various applications, including SaaS and on-premises apps. And finally, stream analytics using application and transaction events captured by GoldenGate Stream Analytics. 
    05:03
    Nikita: We know GoldenGate has long been a staple for enterprise data integration. So Brent, what makes GoldenGate the best choice today, and how has its architecture evolved?
    Brent: It offers DIY Stream Analytics. GoldenGate remains the top choice for enterprise-standard, real-time data streaming. It supports Oracle and third-party databases, vector sources, messaging systems, and NoSQL databases. 
    OCI offers a fully managed pipeline builder for Stream Analytics. This pipeline leverages various OCI services, such as OCI Streaming for real-time event ingestion, OCI Dataflow for stream processing, OCI Big Data for data storage and processing, and OCI Stream Analytics for real-time event processing and analysis. 
    GoldenGate microservices, available since 2017 in Oracle GoldenGate 12.3, are used in over 4,000 deployments in OCI. Benefits of GoldenGate microservices include the ability to employ the same trusted Extract and Replicat processes as the classic architecture. 
    They provide flexible and secure remote administration through a user-friendly web interface or CLI. They can be deployed on-premises, in OCI as a service, and in third-party cloud environments, and they simplify the patching and upgrading process. 
    Now for the GoldenGate architecture evolution. First, the classic architecture, which was deprecated in version 19c and desupported in 23ai. The microservices architecture was introduced in version 12.3 and is the recommended architecture. A migration utility is available to upgrade from the classic to the microservices architecture. 
    07:12
    Are you ready to create and manage AI Agents in Fusion Applications? Check out the Oracle AI Agent Studio for Fusion Applications courses! Start with the Foundations course to build, customize, and deploy AI Agents, and then advance to the Developer Professional certification. Explore hands-on labs and real-world case studies. Visit mylearn.oracle.com for all the details. 
    07:39
    Nikita: Welcome back! It sounds like the latest GoldenGate updates offer new features and integrations. Could you share more about these enhancements?
    Brent: There are many new features and enhancements in GoldenGate, along with microservices, including a redesigned GUI for enhanced usability. Integration with StatsD and Telegraf for monitoring and metrics. OCI IAM integration for secure access control. 
    JSON Relational Duality for flexible data handling. Next-generation AI with distributed vector support. PDB Extract Capture for efficient data extraction from Oracle Pluggable Databases. DDL notification on Target Tables for schema evolution management. 
    Support for non-Oracle and Big Data technologies. Online DDL and EBR enhancement for improved performance. Data Streams Pub-Sub for asynchronous data dissemination. Async API support for standardized event communication. High-availability clusters for increased resilience. Trail Files Management for efficient data storage. And support for new features in 23ai database. 
    It also includes integrated diagnostics for improved troubleshooting of IE and IR processes. And 30 or more OS and database certifications for wider platform support. @Dbfunction Mapping for custom data transformations. And lastly, GoldenGate free recipes for pre-built solutions and best practices. 
    New in GoldenGate, distributed AI processing with vector replication. 
    09:37
    Lois: And what type of use cases does this enable?
    Brent: Migrating vectors into Oracle Vector Database. Replicating and consolidating vector changes. Implementing multi-cloud, multi-active Oracle vector databases. Streaming text and vector changes to search engines. 
    Key considerations include that embedding models must be consistent across all vector stores for effective similarity searches. 
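    The consistency requirement Brent mentions can be made concrete with a small sketch. The cosine function below is a standard similarity measure, not Oracle-specific code; the point is that vectors produced by different embedding models generally have different dimensions (or incompatible spaces), so a comparison is only meaningful when the same model produced both vectors.

```python
import math

def cosine_similarity(a, b):
    # Vectors must come from the same embedding model: different models
    # produce different dimensions and live in incompatible spaces.
    if len(a) != len(b):
        raise ValueError("embedding dimensions differ; were the vectors "
                         "produced by the same model?")
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Same-model vectors compare fine...
print(round(cosine_similarity([1.0, 0.0, 1.0], [1.0, 0.0, 0.0]), 3))  # 0.707

# ...but a vector from a 2-dimensional model cannot be compared:
try:
    cosine_similarity([1.0, 0.0, 1.0], [0.5, 0.5])
except ValueError as e:
    print("error:", e)
```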
    10:09
    Lois: Now, many organizations wonder if they can use generative AI with their own business data. Brent, how do enterprises typically approach this?
    Brent: Organizations are using generative AI typically like this. 
    Building LLMs from scratch. Training models on proprietary data for specific tasks. Fine-tuning LLMs, adapting pre-trained models to a specific domain using private data. And prompt engineering with retrieval augmented generation or RAG. Augmenting prompts with relevant information retrieved from a knowledge base to improve the accuracy and relevance of LLM responses. 
    Now it's possible to create a real-time vector hub for GenAI. This hub can ingest real-time data from various sources, including Oracle and third-party relational databases, vector databases, third-party messaging systems, and NoSQL databases, business updates, documents, events, and alerts. 
    11:11
    Nikita: And how does the vector hub work? 
    Brent: DML and DDL changes, vector changes, and prompt or chat history are used to enrich prompts. An embedding model generates embeddings from the text data. 
    Similarity search is performed on these embeddings to retrieve relevant information from the vector hub. The retrieved information is used to augment the prompt, leading to more accurate and trustworthy answers from the LLM. Now, the benefits of real-time data and generative AI include the ability to ensure answers are based on fresh business data. And helps reduce hallucinations in generative AI responses. 
    Actionable AI and machine learning from streaming pipelines allows data from ERP and SaaS applications, databases, event messaging systems, and NoSQL databases to be ingested into streaming pipelines. This data can then be used for AI and machine learning model training, similarity searches, machine learning tasks, external AI, and machine learning integrations, alerts, and data product creation. 
    12:25
    Lois: So if you had to summarize, Brent, why does GoldenGate 23ai stand out for artificial intelligence workloads?
    Brent: Well, first up, it improves data quality for AI model training and fine-tuning. And secondly, it enhances retrieval augmented generation by providing real-time access to relevant business data, leading to more accurate and trustworthy generative AI responses. 
    Nikita: Thank you, Brent, for sharing your insights and detailing these exciting new features across Oracle's AI stack. If you'd like to dive deeper into these topics, don't forget to visit mylearn.oracle.com and look for the Oracle AI Vector Search Deep Dive course. Until next time, this is Nikita Abraham…
    Lois: And Lois Houston, signing off!
    13:16
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
  • Oracle University Podcast

    RAG with Oracle AI Vector Search and OCI Generative AI: Python and PL/SQL Approaches

    14/04/2026 | 11min
    In this episode of the Oracle University Podcast, hosts Lois Houston and Nikita Abraham are joined by Brent Dayley, Senior Principal APEX & Apps Dev Instructor. Together, they explore how to implement Retrieval Augmented Generation (RAG) using Oracle AI Vector Search and OCI Generative AI. Brent walks listeners through the similarities and differences between building RAG workflows with Python and PL/SQL, offering practical insights into embedding creation, semantic search, and prompt engineering within Oracle's technology stack.
     
    Oracle AI Vector Search Deep Dive: https://mylearn.oracle.com/ou/course/oracle-ai-vector-search-deep-dive/144706/
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode.
     
    Please note, this episode was recorded before Oracle AI Database 26ai replaced Oracle Database 23ai. However, all concepts and features discussed remain fully relevant to the latest release.
     
    --------------------------------------------
     
    Episode Transcript:

    00:00
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:26
    Lois: Hello and welcome to another episode of the Oracle University Podcast! I'm Lois Houston, Director of Communications and Adoption Programs with Customer Success Services, and with me is Nikita Abraham, Team Lead for Editorial Services with Oracle University. 
    Nikita: Hi everyone! If you joined us last week, you'll remember we explored AI Vector Search and how Retrieval Augmented Generation, or RAG, empowers large language models by surfacing relevant business content for smarter, more context-aware answers.
    Lois: That's right, Niki. We also looked at how unstructured data gets transformed into embeddings, how these vectors power semantic search, and how Oracle Database 23ai is uniquely designed to support these advanced AI workflows.
    Nikita: Today, we're building on that foundation with an exciting double feature. We'll start with an introduction to OCI Generative AI Service and how you can use it with Python, and then dive into Retrieval Augmented Generation with Oracle AI Vector Search and the OCI Gen AI service using PL/SQL.
    01:32
    Lois: And to walk us through these topics, we're delighted to welcome back Brent Dayley, Senior Principal APEX & Apps Dev Instructor. Brent, it's great to have you. So, tell us, how does the OCI Generative AI service use Oracle AI Vector Search?
    Brent: So the OCI Generative AI service allows us to take user questions and augment them using external data from outside of the large language model, which allows us to return augmented content. 
    We leverage Oracle AI Vector Search in order to retrieve contextually relevant information. And we create prompts with meaningful structure to help guide the user to input the appropriate types of questions. This allows us to retrieve the data using a large language model. 
    02:27
    Nikita: What are the typical steps for implementing a RAG workflow using the OCI Generative AI service in Python?
    Brent: We would load the document. Transform the document to text. And then split the text into chunks. 
    So if you're talking about maybe a PDF that contains chapters, we might split the different chapters into individual chunks. We would then set up Oracle AI Vector Search and insert the embedding vectors. We would build the prompt to query the document. And then we would invoke the chain. 
    So first, you would load the text sources from a file. Open a terminal window and connect to your compute instance. And launch IPython to allow interactive work. 
    IPython lets you run a series of commands as separate steps. You might load a source file called FAQs.
    Next, you would load the FAQ chunks into the Vector Database. You would create a connection and connect to your database. And then create the table. And then you would vectorize the text chunks and then encode the text chunks. And then insert the chunks and vectors into the database. 
    Next, you would vectorize the question. Define the SQL script ordering the results by the calculated score. Define the question. Write the retrieval code. And then execute the code. Finally, you would print the results.
    Then we would create the large language model prompt and call the AI generative LLM. Ensure that our prompt does not exceed the maximum context length of the model. And then define the prompt content. 
    We would then initialize the OCI client and then make the call. 
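    The Python steps Brent just walked through can be sketched end to end in miniature. Everything here is a stand-in: the bag-of-words embed() replaces a real embedding model, a Python list replaces the Oracle vector table, and the final prompt string is where a real pipeline would initialize the OCI client and make the call.

```python
import math
from collections import Counter

VOCAB = ["refund", "shipping", "password", "reset", "order", "delivery"]

def embed(text):
    # Toy bag-of-words embedding over a fixed vocabulary; a real pipeline
    # would call an ONNX or OCI Generative AI embedding model instead.
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

# 1. "Load" the document and split it into chunks (here: one FAQ per chunk).
faq_chunks = [
    "To reset your password click the reset link on the login page",
    "Refund requests are processed within five business days",
    "Shipping and delivery times depend on your order destination",
]

# 2. Vectorize and store the chunks (an in-memory stand-in for the vector table).
store = [(chunk, embed(chunk)) for chunk in faq_chunks]

# 3. Vectorize the question and order the chunks by similarity score.
question = "how do I reset my password"
qvec = embed(question)
ranked = sorted(store, key=lambda cv: cosine(qvec, cv[1]), reverse=True)

# 4. Use the best chunk as context when building the LLM prompt.
best_chunk = ranked[0][0]
prompt = f"Answer using only this context:\n{best_chunk}\n\nQuestion: {question}"
print(best_chunk)
```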
    04:47
    Here's some exciting news! Oracle University has training to help your teams unlock Redwood—the next-gen design system for Fusion Cloud Applications. Learn how Redwood improves your user experience and discover how to personalize your Fusion investment using Visual Builder Studio. Whatever your role, visit mylearn.oracle.com and check out these courses today! 
    05:12
    Nikita: Thanks, Brent. That gives us a nice overview of how Python can be leveraged with OCI Generative AI. Now, how would you compare working with Python for building RAG applications to using PL/SQL? Can you walk us through the high-level process for building a RAG solution in this environment?
    Brent: First, we would want to load the document. Next, we would transform the document into plain text. After that, we would take that text and split it into meaningful chunks. Next, we would go ahead and set up Oracle AI Vector Search and insert the embedding vectors. We would then build the prompt so that we can query the document. And then we would invoke all of those previous steps as our chain. 
    06:04
    Lois: OK, and can we take a closer look at each of these steps? 
    Brent: Step 1, text extraction and preparation. So, let's imagine we have some sort of document that we want to use as the augmented information. We would load that document. Next, we would transform the document to text. And we have a function in the DBMS_VECTOR_CHAIN package called UTL_TO_TEXT. And this is used to extract plain text from the loaded documents. 
    Next, we would want to split the text into meaningful chunks. The DBMS_VECTOR_CHAIN package has another function called UTL_TO_CHUNKS, which allows us to divide the extracted text into smaller, more manageable pieces, which we call chunks. 
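    Inside the database, the chunking is done for you. As a rough illustration of what chunking means, here is a fixed-size word-window splitter with overlap; the parameters and logic are illustrative only, not how the Oracle function is implemented.

```python
def split_into_chunks(text, max_words=50, overlap=10):
    # Conceptual sketch of chunking: fixed-size word windows with a small
    # overlap so context isn't cut off mid-thought at chunk boundaries.
    if max_words <= overlap:
        raise ValueError("max_words must exceed overlap")
    words = text.split()
    chunks, start = [], 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + max_words]))
        start += max_words - overlap
    return chunks

doc = " ".join(f"word{i}" for i in range(120))
chunks = split_into_chunks(doc, max_words=50, overlap=10)
print(len(chunks))          # 120 words -> windows starting at 0, 40, 80
print(chunks[1].split()[0]) # second chunk starts 40 words in: word40
```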
    07:02
    Nikita: Once we have our text chunks ready, what's the next step to make our data searchable and useful for the large language model?
    Brent: Step number 2, we would want to go ahead and use embedding models in order to create our vectors. We would load multiple ONNX models into the database. And the reason we would do this is because models with a greater number of dimensions usually produce higher quality vector embeddings. 
    So you might want to load multiple different ONNX models into the database so that you can generate embeddings from each of the models, and then compare those vector embeddings using those different models. You would create vector embeddings using PL/SQL packages. 
    07:55
    Lois: After embeddings are created, how does the solution find the most relevant content in response to a user's question?
    Brent: Step 3, we would then go and do a similarity search so that we can return a response. We would select the text chunks that have the relevant information for the input user question based on vector search. This allows for integrating with Oracle's Gen AI Large Language Model Service to generate responses. The process ensures that the large language model generates contextually appropriate and relevant answers for those users' queries. 
    Now, step 4 is to build the prompt, and I want to stress the importance of large language model prompt engineering. What this will do is to carefully craft input queries or instructions so that we can get more accurate and desirable outputs from the large language model. 
    This allows developers to guide the LLM's behavior and tailor its responses to specific requirements. This is what we call LLM Prompt Engineering. And it allows us, as I was saying, to craft input queries or instructions so that we can create more accurate and desirable outputs. 
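    A minimal sketch of the prompt-building step might look like the function below. The template wording and the character budget are assumptions for illustration; production code would count model tokens and use whatever prompt format the target LLM expects.

```python
def build_rag_prompt(question, context_chunks, max_chars=2000):
    # Assemble retrieved chunks into the prompt, trimming so we stay under
    # a rough character budget (real code would count model tokens instead).
    context = ""
    for chunk in context_chunks:
        if len(context) + len(chunk) + 1 > max_chars:
            break
        context += chunk + "\n"
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\nQuestion: {question}"
    )

prompt = build_rag_prompt(
    "What is the refund window?",
    ["Refunds are accepted within 30 days of purchase.",
     "Shipping is free on orders over $50."],
)
print("Refunds are accepted" in prompt)  # True
```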
    Next, we would use an example interactive RAG application that uses the Streamlit framework in order to create a user-friendly interface. This interface will allow us to upload documents, pose the question, and receive relevant answers generated by the underlying RAG pipeline within the database. 
    In the final step, we will have an input prompt that asks us to ask a question about the PDF. We will then type in some sort of a question relative to the PDF content. And then we would retrieve the return data based on the input question. 
    10:11
    Nikita: Brent, thank you for walking us through both the Python and PL/SQL approaches for building RAG solutions with Oracle Generative AI. If you'd like to dive deeper into these topics, don't forget to visit mylearn.oracle.com and look for the Oracle AI Vector Search Deep Dive course. Until next time, this is Nikita Abraham…
    Lois: And Lois Houston, signing off!
    10:33
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
  • Oracle University Podcast

    Retrieval Augmented Generation (RAG)

    07/04/2026 | 12min
    Join hosts Lois Houston and Nikita Abraham as they explore one of the most exciting innovations in enterprise AI: Retrieval Augmented Generation (RAG) powered by Oracle AI Vector Search. In this episode, Senior Principal APEX & Apps Dev Instructor Brent Dayley walks through the fundamentals of RAG, explaining how it combines Oracle Database 23ai, vector embeddings, and large language models to deliver accurate, context-rich answers from both business and unstructured data. Discover the typical RAG workflow, practical setup steps on Oracle Cloud Infrastructure, and how to work with embedding models for real-world applications.
     
    Oracle AI Vector Search Deep Dive: https://mylearn.oracle.com/ou/course/oracle-ai-vector-search-deep-dive/144706/
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode.
     
    Please note, this episode was recorded before Oracle AI Database 26ai replaced Oracle Database 23ai. However, all concepts and features discussed remain fully relevant to the latest release.
     
    ----------------------------------------------
     
    Episode Transcript

    00:00
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:26
    Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and joining me is Lois Houston, Director of Communications and Adoption Programs with Customer Success Services.
    Lois: Hi everyone! If you've been with us this season, you'll know we've already covered a lot about Oracle AI Vector Search. In Episode 1, we introduced the core concepts—how vectors let you search by meaning, not just keywords, and how embedding models translate your unstructured data into a searchable format inside Oracle Database 23ai. 
    Nikita: Then, in Episode 2, we took a deeper dive into how these vectors are actually stored and managed. We explored the different types of vector indexes, similarity metrics, and best practices for designing and optimizing your database for semantic search. 
    Lois: Right. Today, we're shifting gears into one of the most exciting real-world applications: Retrieval Augmented Generation, or RAG. You'll learn how RAG combines the power of Oracle AI Vector Search with large language models to answer natural language questions using both business and unstructured data. 
    01:39
    Nikita: We'll walk through the workflow, highlight why Oracle Database is uniquely suited for RAG, and give you the essential steps to get started. Back again is Senior Principal APEX & Apps Dev Instructor Brent Dayley. Hi Brent! Could you explain what RAG is, and why it's important for working with AI and large language models?
    Brent: Well, RAG stands for Retrieval Augmented Generation. And this is a technique that allows us to enhance the capabilities of large language models, also known as LLMs, and this provides them with relevant context from external knowledge sources. This will allow the LLMs to generate more accurate, informative, and context-aware responses. Real world applications include answering questions, chatbot development, content summarization, and knowledge discovery. 
    02:35
    Lois: Brent, what makes Oracle Database 23ai a good platform for implementing RAG workflows?
    Brent: Now, there are some key advantages of using Oracle Database 23ai as a RAG platform. These include native functionality, allowing built-in tools and packages specifically designed for RAG pipeline development. 
    Also, if you are a PL/SQL developer, then this will allow you to develop within a familiar and robust database environment. Also, Oracle has a plethora of security and performance tools. And this ensures enhanced security and optimized performance. 
    03:18
    Nikita: What does a typical RAG workflow look like in Oracle Database 23ai? What are the main steps involved?
    Brent: Now, the primary workflow steps are going to be to generate vector embeddings from your unstructured data. You do this using vector embedding models. And you can generate those embeddings either inside or outside of the database. 
    Next, you need to store the vector embeddings, the unstructured data, and the relational business data, and you can store all of that in the Oracle Database. You might want to also create vector indexes that can allow you to run similarity searches over huge vector spaces with really good performance. 
    Finally, you need to query data with similarity searches. You can use Oracle AI Vector Search native SQL operations to combine similarity with relational searches to retrieve relevant data. And optionally, you can generate a prompt and send it to a large language model for full RAG inference. 
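    The "combine similarity with relational searches" idea can be illustrated in a few lines of Python. The rows, region values, and two-dimensional vectors are invented for the example; in the database this would be a single SQL query with a WHERE clause and an ORDER BY on a vector distance.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

# Rows combine relational attributes with a vector, the way a 23ai table
# can hold business columns alongside a VECTOR column.
rows = [
    {"id": 1, "region": "EMEA", "vec": [0.9, 0.1]},
    {"id": 2, "region": "APAC", "vec": [0.8, 0.2]},
    {"id": 3, "region": "EMEA", "vec": [0.1, 0.9]},
]

query_vec = [1.0, 0.0]

# Relational predicate first (WHERE region = 'EMEA'), then order the
# survivors by vector similarity -- the hybrid search described above.
hits = sorted(
    (r for r in rows if r["region"] == "EMEA"),
    key=lambda r: cosine(query_vec, r["vec"]),
    reverse=True,
)
print([r["id"] for r in hits])  # [1, 3]
```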
    04:30
    Lois: Can you give us an example of how this workflow operates in practice?
    Brent: A user's natural language question is encoded as a vector and sent to AI Vector Search. Next, AI vector search finds private content, such as documents, that are stored in the database, and those will match the user's question. The content is then sent to Oracle's GenAI service to help answer the user's question. And then GenAI uses the content plus general knowledge to provide an informed answer back to the user. 
    05:14
    Nikita: What does the overall user experience look like when interacting with RAG? How does Oracle ensure the answers are both accurate and up to date?
    Brent: In this case, we have a chatbot. This is the interface that we usually use to enable dialogue with the large language model. Now, in order to improve the quality of the answers, we want to search your private business data, and that allows us to pass the most relevant facts back to the LLM. 
    Next, we want to format the similarity search results as a prompt and context for the large language model. Now, this will allow us to use up to date facts as input to LLMs. And that will minimize the probability of the LLM hallucinating. And those high-quality responses are then returned back to the chatbot. 
    06:12
    Lois: Brent, what does the setup process look like for getting RAG up and running with Oracle AI Vector Search on OCI? Can you take us through the main steps?
    Brent: First, you will log into OCI. Provide your cloud account name and click Next. There are also interfaces for signing in using a traditional cloud account. And if you're not an Oracle Cloud customer yet, you can also sign up using this page. 
    Next, after signing in, you will create a compute instance. And you will use the Oracle Cloud Infrastructure Console in order to do this. And you will wind up with a user called opc. You'll notice that you're using SSH in order to connect to your compute instance, and you're running a script in order to set up the Oracle Database. 
    After that, you will set up the Python environment, again using SSH to connect as an OPC user to your compute instance. 
    07:22
    Do you want to optimize your implementation strategies? Check out the Oracle Fusion Cloud Applications Process Essentials training and certifications for insight into key processes and efficiencies across every phase of your Fusion Cloud Apps journey. Learn more at mylearn.oracle.com. 
    07:43
    Nikita: Welcome back! So far, we've seen how Oracle AI Vector Search powers RAG, letting you surface relevant business knowledge for large language models and enhance their answers. At the heart of all this is the process of transforming unstructured data, like text or documents, into mathematical representations called embeddings. 
    Lois: Those embeddings are what make meaningful, semantic search possible. But have you wondered how those embeddings actually get created, or what goes on behind the scenes when you choose an embedding model? 
    Nikita: Up next, we'll take a closer look at embedding models themselves: what they are, how to use them inside Oracle Database 23ai, and how you can experiment with different models to get the results that best fit your business needs. 
    Lois: We'll walk through importing models, generating embeddings, and even how you can swap out embedding models to compare results. But before we get into the nitty-gritty details, let's quickly recap embedding models, since we've mentioned them in our previous episodes. 
    08:47
    Nikita: Brent, for listeners who might need a refresher, can you explain what embedding models are and why they're so central to AI Vector Search? 
    Brent: AI Vector Search is based on similarity properties. You can search data by semantic similarity rather than by the actual values. Vector embeddings are created by embedding models to represent the unstructured data. So we have input data. 
    What we'll want to do is to use an embedding model to generate vector embeddings. And then the vector embeddings would be stored inside of a vector column in a table. We would then compare those vectors to each other using a vector distance function. 
    And we would get the relevant content back based on the number of results that we specify. For instance, maybe we want to bring back the five closest pieces of data compared to the input data. 
    There is a new function, called VECTOR_EMBEDDING, that allows you to generate vector embeddings within the database. 
    10:08
    Lois: Can you walk us through the practical steps for using embedding models with Oracle AI Vector Search?
    Brent: In order to create and set up a table, we might use the Python program called create_schema.py. And that will allow us to create a table. 
    We would ensure that the table was successfully created with the data. As an example, I would create a table called MY_DATA. Next, we would use a sentence transformers embedding model in order to vectorize the table. We can use the Python program, vectorize_table_SentenceTransformers.py. We would then query the MY_DATA table in the Oracle Database to verify that the data has been updated. 
    And then we would use sentence transformers in order to perform the similarity search. The Python program is called similarity_search_SentenceTransformers.py. And what that would do is create the table and then perform a similarity search using the sentence transformers. Now what if you decide that you want to change embedding models? Maybe you want to compare the results of using one particular model as compared to a different model. 
    So you can change the embedding model. And in order to do that, you would change the embedding model in both of the programs and re-vectorize the table using the vectorize_table_SentenceTransformers.py program. You would then use the new model with different words, possibly, and then compare and review the results, and then choose which one gets you back the data that you're looking for that is most similar. 
    12:02
    Nikita: Well, that's a wrap on this episode. A big thank you, Brent, for sharing your expertise with us. 
    Lois: If you want to learn more about the topics we discussed today, visit mylearn.oracle.com and search for the Oracle AI Vector Search Deep Dive course. Until next time, this is Lois Houston…
    Nikita: And Nikita Abraham, signing off!
    12:25
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
  • Oracle University Podcast

    Inside Oracle AI Vector Search: Indexes, Metrics, and Best Practices

    31/03/2026 | 20min
    Go deeper into Oracle AI Vector Search as hosts Lois Houston and Nikita Abraham, along with Senior Principal APEX & Apps Dev Instructor Brent Dayley, break down how vector indexes, memory requirements, and similarity metrics make fast, powerful semantic search possible in Oracle Database 23ai. Learn about the different types of vector indexes, the VECTOR data type, and how exact and approximate similarity searches work, including best practices for vector management and search performance.
     
    Oracle AI Vector Search Fundamentals:  https://mylearn.oracle.com/ou/course/oracle-ai-vector-search-fundamentals/140188/
    Oracle University Learning Community:  https://education.oracle.com/ou-community
    LinkedIn:  https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode.
     
    Please note, this episode was recorded before Oracle AI Database 26ai replaced Oracle Database 23ai. However, all concepts and features discussed remain fully relevant to the latest release.
     
    ----------------------------------------
     
    Episode Transcript:


    00:00
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:26
    Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and joining me is Lois Houston, Director of Communications and Adoption Programs with Customer Success Services.
    Lois: Hi everyone! Thanks for joining us again as we continue our exploration into the exciting world of Oracle AI Vector Search. In today's episode, we're taking you inside the technology powering vector search in Oracle Database 23ai. We'll break down core concepts like vector indices, how vectors are stored and managed, and how you can use similarity metrics to unlock new possibilities with your data. 
    01:09
    Nikita: We'll also dig into best practices for handling vectors, everything from memory requirements and table creation to the nuts and bolts of running both exact and approximate similarity searches. Back with us today is Senior Principal APEX & Apps Dev Instructor Brent Dayley. Hi Brent! What exactly are vector indexes?
    Brent: Now, vector indexes are specialized indexing data structures that can make your queries against your vectors more efficient. They use techniques such as clustering, partitioning, and neighbor graphs. Now, they greatly reduce the search space, which means that your queries happen quicker. They're also extremely efficient. They do require that you enable the vector pool in the SGA. 
    02:06
    Lois: And are there different types of vector indices supported?
    Brent: So, Oracle AI Vector Search supports two types of indexes: the in-memory neighbor graph vector index and the neighbor partition vector index. HNSW is the only type of in-memory neighbor graph vector index that is supported. These are very efficient indexes for approximate vector similarity search. HNSW graphs are structured using principles from small world networks, along with layered hierarchical organization. 
    The inverted file flat (IVF) index is the only type of neighbor partition vector index supported. It is a partition-based index which balances high search quality with reasonable speed. 
    In order for you to be able to use vector indexes, you do need to enable the vector pool area. And in order to do that, what you need to do is set the vector memory size parameter. 
    You can set it at the container database level, and the PDB inherits it from the CDB. Now bear in mind that the database does have to be restarted when you set the vector pool. 
    Other considerations: vector indexes are stored in this pool, and vector metadata is also stored here. So large vector indexes do need lots of RAM, and RAM constrains the vector index size. You should use IVF indexes when there is not enough RAM, because IVF indexes use both the buffer cache and disk. 
    04:05
    Lois: Now, memory is definitely a key consideration, right? Can you share more about the memory requirements and considerations for working with vectors?
    Brent: So to remind you, a vector is a numerical representation of text, images, audio, or video that encodes the features or semantic meaning of the data, instead of the actual contents, such as the words or pixels of an image. So the vector is a list of numerical values known as dimensions with a specified format. 
    Now, Oracle does support the int8 format, the float32 format, and the float64 format. The format determines the number of bytes per dimension. For instance, int8 is one byte and float32 is four bytes. 
    04:56
    Nikita: And how do you calculate the size of a vector?
    Brent: Now, that's going to depend upon the embedding model that you use to create those embeddings. Oracle AI Vector Search supports vectors with up to 65,535 dimensions. As a reminder, vectors are stored in tables and table data is stored on disk. 
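    Since a vector's raw size is just its dimension count times the bytes per dimension for its format, a quick back-of-the-envelope calculation is possible (float64 at eight bytes per dimension is an assumption consistent with the IEEE double-precision format; real storage also carries some per-value overhead this sketch ignores):

    ```python
    # Bytes per dimension for the formats the transcript mentions.
    BYTES_PER_DIMENSION = {"int8": 1, "float32": 4, "float64": 8}

    def vector_size_bytes(dimensions, fmt):
        # Raw storage for the dimension values themselves.
        return dimensions * BYTES_PER_DIMENSION[fmt]

    # A 1,024-dimension float32 embedding:
    print(vector_size_bytes(1024, "float32"))  # 4096 bytes, i.e. 4 KB
    ```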
    05:19
    Nikita: Let's talk about working with vectors in tables. Can you walk us through how Oracle Database 23ai supports creating tables with vector columns?
    Brent: Now, Oracle Database 23ai does have a new VECTOR data type. The new data type was created in order to support vector search. 
    The definition can include the number of dimensions and can include the format. Bear in mind that both of those are optional when you define your column. The possible dimension formats are int8, float32, and float64. Float32 and float64 are IEEE standards, and Oracle Database will automatically cast the value if needed. 
    Let's take a look at some of the declaration examples. Now, if we just declare the column as VECTOR, then the vectors can have any arbitrary number of dimensions and formats. If we declare the type as VECTOR(*, *), that also means that vectors can have an arbitrary number of dimensions and formats; VECTOR and VECTOR(*, *) are equivalent. VECTOR with the number of dimensions specified, followed by a comma and then an asterisk, is equivalent to VECTOR with just the number of dimensions. 
    Vectors must all have the specified number of dimensions, or an error will be thrown. Every vector will have its dimensions stored without format modification. And if we declare VECTOR(*, format), specifying a dimension element format, vectors can have an arbitrary number of dimensions, but their values will be up-converted or down-converted to the specified dimension element format, either int8, float32, or float64. 
    07:25
    Lois: Are there any operations or configurations that are prohibited with the VECTOR data type?
    Brent: You cannot define vector columns in or as external tables, index-organized tables (neither as the primary key nor as non-key columns), clusters or cluster tables, global temporary tables, or as a subpartitioning key, primary key, foreign key, or unique constraint. 
    Additionally, you cannot define vector columns in or as check constraints, default values, or modify column operations, or in manual segment space managed tablespaces. Only the SYS user can create vectors as BasicFiles in manual segment space managed tablespaces. Nor can you use them for continuous query notification queries, or for non-vector indexes such as B-tree, bitmap, reverse key, text, or spatial indexes. Also, bear in mind that Oracle does not support distinct, count distinct, order by, group by, join conditions, or comparison operators such as less than, greater than, or equal to with vector columns. 
    08:46
    Have you already nailed the basics of AI? Then it's time to level up. Explore advanced AI with our OCI AI Professional courses and certifications covering Data Science, Generative AI, and AI Vector Search. Are you ready to take the next step? Head over to mylearn.oracle.com and learn more!
    09:12
    Nikita: Welcome back! Now, let's shift gears and discuss vector search itself. How does one create a vector "on the fly" for testing or learning purposes?
    Brent: Now, the vector constructor is a function that allows us to create vectors without having to store them in a column in a table. These are useful for learning purposes. You usually use these with a smaller number of dimensions. Bear in mind that most embedding models can contain thousands of different dimensions. You get to specify the vector values, and they usually represent something two-dimensional, like xy coordinates. The dimensions are optional, and the format is optional as well. 
    10:01
    Lois: Once we have vectors, how do we compare them or measure how "close" they are to each other?
    Brent: Vector distance calculations use VECTOR_DISTANCE as the main function. It allows you to calculate the distance between two vectors and therefore takes two vectors as parameters. Optionally, you can specify a metric. If you do not specify a metric, then the default metric, COSINE, would be used. 
    You can optionally use other shorthand functions, too. These include L1 distance, L2 distance, cosine distance, and inner product. All of these functions also take two vectors as input and return the distance between them. Now the VECTOR_DISTANCE function can be used to perform a similarity search. And bear in mind these caveats. If a similarity search query does not specify a distance metric, then the default cosine metric will be used for both exact and approximate searches. 
    If a similarity search does specify a distance metric in the VECTOR_DISTANCE function, then an exact search with that distance metric is used if it conflicts with the distance metric specified in a vector index. If the two distance metrics are the same, then this will be used for both exact as well as approximate searches. 
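    The shorthand functions Brent mentions map to well-known formulas. As a rough sketch, these plain-Python stand-ins (not the database functions themselves) show what each one computes:

    ```python
    import math

    def l1_distance(a, b):
        # Manhattan distance: sum of absolute coordinate differences.
        return sum(abs(x - y) for x, y in zip(a, b))

    def l2_distance(a, b):
        # Euclidean distance: straight-line distance between the vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def inner_product(a, b):
        # Dot product: larger values mean more similar vectors.
        return sum(x * y for x, y in zip(a, b))

    def cosine_distance(a, b):
        # 1 minus the cosine of the angle between the vectors; COSINE is
        # the default metric when no metric is specified.
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return 1 - inner_product(a, b) / (norm_a * norm_b)

    print(l1_distance([0, 0], [3, 4]),    # 7
          l2_distance([0, 0], [3, 4]),    # 5.0
          inner_product([1, 2], [3, 4]))  # 11
    ```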
    11:44
    Nikita: Can you break down the distance metrics we use in Oracle AI Vector Search?
    Brent: We have Euclidean and Euclidean squared distances. We have cosine similarity, dot product similarity, Manhattan distance, and Hamming similarity. Now let's take a closer look at the first of these metrics, Euclidean and Euclidean squared distances. This gives us the straight-line distance between two vectors. It does use the Pythagorean theorem. And notice that it is sensitive to both the vector size as well as the direction. 
    With Euclidean distances, comparing squared distances is equivalent to comparing distances. So when ordering is more important than the distance values themselves, the squared Euclidean distance is very useful as it is faster to calculate than the Euclidean distance, which avoids the square root calculation. 
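    Brent's point about squared distances can be checked directly: since squaring is monotonic for non-negative values, ranking by squared Euclidean distance gives the same order as ranking by Euclidean distance. A small sketch with toy points:

    ```python
    import math

    def squared_euclidean(a, b):
        # Skips the square-root step, so it is cheaper to compute.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def euclidean(a, b):
        return math.sqrt(squared_euclidean(a, b))

    query = [0.0, 0.0]
    points = [("a", [3.0, 4.0]), ("b", [1.0, 1.0]), ("c", [6.0, 8.0])]

    by_distance = sorted(points, key=lambda p: euclidean(query, p[1]))
    by_squared = sorted(points, key=lambda p: squared_euclidean(query, p[1]))

    # Both orderings are identical, which is all a top-k search needs.
    print([p[0] for p in by_distance] == [p[0] for p in by_squared])  # True
    ```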
    12:54
    Lois: Cosine similarity is a term I hear often. How does it work exactly?
    Brent: It is one of the most widely used similarity metrics, especially in natural language processing. The smaller the angle means they are more similar. While cosine distance measures how different two vectors are, cosine similarity measures how similar two vectors are. 
    13:20
    Nikita: Dot product similarity comes up a lot, too. What's its role?
    Brent: Dot product similarity multiplies the sizes of the two vectors by the cosine of the angle between them. The corresponding geometrical interpretation of this definition is equivalent to multiplying the size of one of the vectors by the size of the projection of the second vector onto the first one, or vice versa. A larger value means that they are more similar; a smaller value means that they are less similar. 
    13:58
    Lois: How does Manhattan distance differ from other metrics, and when is it used?
    Brent: This is useful for describing uniform grids. You can imagine yourself walking from point A to point B in a city such as Manhattan. Now, since there are buildings in the way, maybe we need to walk down one street and then turn and walk down the next street in order to get to our destination. As you can imagine, this metric is most useful for vectors describing objects on a uniform grid, such as city blocks, power grids, or perhaps a chessboard. It is also faster to compute than the Euclidean metric. 
    14:48
    Nikita: And how is Hamming similarity different from the others?
    Brent: This describes where vector dimensions differ. They are binary vectors, and it tells us the number of bits that require change to match. It compares the position of each bit in the sequence. Now, these are usually used in order to detect network errors. 
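    A minimal sketch of Hamming distance on binary vectors, counting the bit positions that differ (toy data, for illustration only):

    ```python
    def hamming_distance(a, b):
        # Number of bit positions where the two binary vectors differ,
        # i.e., how many bits would have to change for them to match.
        return sum(1 for x, y in zip(a, b) if x != y)

    print(hamming_distance([1, 0, 1, 1], [1, 1, 1, 0]))  # 2
    ```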
    15:17
    Nikita: Now that we've covered the foundations, how do we actually search for the "closest" vectors in our data? What's an exact similarity search?
    Brent: An exact similarity search allows you to calculate the query vector distance to all other vectors. This is also called a flat search or an exact search. This does give you the most accurate results. It gives you perfect search quality. However, you might have potentially long search times. 
    Now, this comparison is done using a particular distance metric. But what is important is the result set of your top closest vectors not the distance between them. 
    Let's take a look at one of the metrics. This one is Euclidean. The Euclidean similarity search retrieves the top k nearest vectors in your space relative to the Euclidean distance metric and a query vector.  
    Now let's take a look at Euclidean squared distance. In the case of Euclidean distances, comparing squared distances is equivalent to comparing distances. So when ordering is more important than the distance values themselves, the Euclidean squared distance is very useful, as it is faster to calculate than the Euclidean distance, avoiding the square-root calculation. 
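    An exact (flat) search, as described, simply measures the query's distance to every stored vector and keeps the top k. A toy sketch with the Euclidean metric (illustrative data, not a database call):

    ```python
    import math

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def exact_search(query, vectors, k):
        # Compute the query's distance to *all* vectors (perfect search
        # quality, potentially long search times) and keep the top k.
        ranked = sorted(vectors, key=lambda item: euclidean(query, item[1]))
        return [item[0] for item in ranked[:k]]

    vectors = [("v1", [0.0, 1.0]), ("v2", [2.0, 2.0]),
               ("v3", [0.1, 0.9]), ("v4", [5.0, 5.0])]
    print(exact_search([0.0, 1.0], vectors, k=2))  # ['v1', 'v3']
    ```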
    16:46
    Lois: How does that compare to approximate searches, which are usually faster, using vector indices?
    Brent: Approximate similarity search is a type of vector search that uses vector indexes. In order to use vector indexes, you have to ensure that you have enabled the vector pool in the SGA. For a vector search to be useful, it needs to be fast and accurate. 
    These types of searches can be more efficient; however, the trade-off is that they can be less accurate. Now, approximate searches use vector indexes, and there are many types of approximate searches that you can perform using vector indexes. Vector indexes can be less accurate, but they consume fewer resources. Because 100% accuracy cannot be guaranteed by the heuristics, vector index searches use target accuracy. 
    Internally, the algorithms used for both the index creation and index search are doing their best to be as accurate as possible. You do have the option to influence those algorithms by specifying a target accuracy. 
    Let's take a look at vector indexes a little closer. We have two types of vector indexes. We have HNSW indexes, which stand for Hierarchical Navigable Small World index, and we have Inverted File Flat index, or IVF.
    18:23
    Nikita: And for more complex requirements, how does Oracle handle multi-vector similarity search?
    Brent: Multi-vector similarity search is usually used for multi-document search. The documents would be split into chunks. The chunks would be embedded individually into vectors. It does use the concept of groupings called partitions. A multi-vector search consists of retrieving the top K vector matches, using the partitions based on the document's characteristics. 
    The ability to score documents based on the similarity of their chunks to a query vector is facilitated in SQL using the partitioned row-limiting clause. 
    Now, the partitioned row-limiting clause extension is a generic extension of the SQL language; it does not apply only to vector searches. Multi-vector search with the partitioned row-limiting clause does not use vector indexes.
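    The idea can be sketched in plain Python: chunk vectors are grouped by their source document, each document is scored by its best-matching chunk, and the top-k documents are returned. This is only a conceptual analogue of the SQL partitioned row-limiting clause, not its implementation:

    ```python
    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    # Chunk vectors grouped ("partitioned") by the document they came from.
    chunks = [
        ("docA", [1.0, 0.0]), ("docA", [0.0, 1.0]),
        ("docB", [0.9, 0.1]), ("docB", [0.7, 0.3]),
        ("docC", [-1.0, 0.0]),
    ]

    def top_k_documents(query, chunks, k):
        # Score each document by its best-matching chunk, then return
        # the k highest-scoring documents.
        best = {}
        for doc, vec in chunks:
            score = cosine_similarity(query, vec)
            best[doc] = max(score, best.get(doc, float("-inf")))
        return sorted(best, key=best.get, reverse=True)[:k]

    print(top_k_documents([1.0, 0.0], chunks, k=2))  # ['docA', 'docB']
    ```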
    19:32
    Lois: We covered quite a lot today! Thanks for that, Brent! If you want to learn more about the topics we discussed today, go to mylearn.oracle.com and search for the Oracle AI Vector Search Fundamentals course. Until next time, this is Lois Houston…
    Nikita: And Nikita Abraham, signing off!
    19:52
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
  • Oracle University Podcast

    Introduction to Oracle AI Vector Search

    24/03/2026 | 15min
    Explore Oracle AI Vector Search and learn how to find data by meaning, not just keywords, using powerful vector embeddings within Oracle Database 23ai. In this episode, hosts Lois Houston and Nikita Abraham, along with Senior Principal APEX & Apps Dev Instructor Brent Dayley, break down how similarity search works, the new VECTOR data type, and practical steps for implementing secure, AI-powered search across both structured and unstructured data.
     
    Oracle AI Vector Search Fundamentals: https://mylearn.oracle.com/ou/course/oracle-ai-vector-search-fundamentals/140188/
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode.
     
    ----------------------------------------------------

    Episode Transcript:
     
    00:00
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:26
    Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Communications and Adoption Programs with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University.
    Nikita: Hi everyone! Today, we're beginning a brand-new season, this time on Oracle AI Vector Search. Whether you're new to vector searches or you've already been experimenting with AI and data, this episode will help you understand why Oracle's approach is such a game-changer.
    Lois: To make sure we're all starting from the same place, here's a quick overview. Oracle AI Vector Search lets you go beyond traditional database searches. Not only can you find data based on specific attribute values or keywords, but you can also search by meaning, using the semantics of your data, which opens up a whole new world of possibilities.
    01:20
    Nikita: That's right, Lois. And guiding us through this episode is Senior Principal APEX & Apps Dev Instructor Brent Dayley. Hi Brent! What's unique about Oracle's approach to vector search? What are the big benefits?
    Brent: Now one of the biggest benefits of Oracle AI Vector Search is that semantic search on unstructured data can be combined with relational search on business data, all in one single system. This is very powerful, and also a lot more effective because you don't need to add a specialized vector database. And this eliminates the pain of data fragmentation between multiple systems. 
    It also supports Retrieval Augmented Generation, also known as RAG. Now this is a breakthrough generative AI technique that combines large language models and private business data. And this allows you to deliver responses to natural language questions. RAG provides higher accuracy and avoids having to expose private data by including it in the large language model training data. 
    02:41
    Lois: OK, and can you explain what the new VECTOR data type is?
    Brent: So, this data type was introduced in Oracle Database 23ai. And it allows you to store vector embeddings alongside other business data. 
    Now, the vector data type provides a foundation for storing vector embeddings. It lets you keep your unstructured data's embeddings in the database alongside your business data and use both in your queries. So it allows you to apply semantic queries to your business data. 
    03:24
    Lois: For many of our listeners, "vector embeddings" might be a new term. Can you explain what vector embeddings are?
    Brent: Vector embeddings are mathematical representations of data points. They assign mathematical representations based on meaning and context of your unstructured data. 
    You have to generate vector embeddings from your unstructured data either outside or within the Oracle Database. In order to get vector embeddings, you can either use ONNX embedding machine learning models or access third-party REST APIs. 
    Embeddings can be used to represent almost any type of data, including text, audio, or visual such as pictures. And they are used in proximity searches. 
    04:19
    Nikita: Now, searching with these embeddings isn't about looking for exact matches like traditional search, right? This is more about meaning and similarity, even when the words or images differ? Brent, how does similarity search work in this context?
    Brent: So vector data tends to be unevenly distributed and clustered into groups that are semantically related. Doing a similarity search based on a given query vector is equivalent to retrieving the k nearest vectors to your query vector in your vector space. 
    What this means is that basically you need to find an ordered list of vectors by ranking them, where the first row is the closest or most similar vector to the query vector. The second row in the list would be the second closest vector to the query vector, and so on, depending on your data set. What we need to do is to find the relative order of distances. And that's really what matters rather than the actual distance. 
    Now, similarity searches tend to get data from one or more clusters, depending on the value of the query vector and the fetch size. Approximate searches using vector indexes can limit the searches to specific clusters. Exact searches visit vectors across all clusters. 
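    The ordered list Brent describes can be sketched directly; only the relative order of distances matters, not the distance values themselves. Toy vectors, for illustration:

    ```python
    import math

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def rank_vectors(query, vectors):
        # Produce the ordered list: row 1 is the closest vector to the
        # query, row 2 the second closest, and so on.
        return sorted(vectors, key=lambda item: euclidean(query, item[1]))

    vectors = [("near", [1.0, 1.0]), ("far", [9.0, 9.0]),
               ("nearest", [0.1, 0.1])]
    ranked = rank_vectors([0.0, 0.0], vectors)
    print([name for name, _ in ranked])  # ['nearest', 'near', 'far']
    ```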
    05:51
    Lois: Let's talk about how we actually convert information into these vectors. There are models behind the scenes, right? Kind of like translators between words, images, and numbers. Brent, what embedding models does Oracle support, and how do they handle different data types?
    Brent: Vector embedding models allow you to assign meaning to a word, a sentence, the pixels in an image, or perhaps audio. What does that actually mean? It allows you to quantify features, or dimensions. 
    Most modern vector embeddings use a transformer model. Bear in mind that convolutional neural networks can also be used. Depending on the type of your data, you can use different pretrained open-source models to create vector embeddings. As an example, for textual data, sentence transformers can transform words, sentences, or paragraphs into vector embeddings. 
    For visual data, you can use residual network, also known as ResNet, to generate vector embeddings. You can also use visual spectrogram representation for audio data. And that allows us to use the audio data to fall back into the visual data case. Now, these can also be based on your own data set. Each model also determines the number of dimensions for your vectors. 
    As an example, Cohere's embedding model, embed English version 3.0, has 1,024 dimensions. OpenAI's embedding model, text-embedding-3-large, has 3,072 dimensions. 
    07:45
    Nikita: For organizations ready to put this into practice, there's the question of how to get the models up and running inside Oracle Database. Can you walk us through how these models are brought into Oracle Database?
    Brent: Although you can generate vector embeddings outside the Oracle Database using pre-trained open-source embedding models or your own embedding models, you also have the option of doing so within the Oracle Database. In order to do that, you need to use models that are compatible with the Open Neural Network Exchange standard, or ONNX, pronounced "onn-ex". 
    Oracle Database implements an ONNX runtime directly within the database, and this is going to allow you to generate vector embeddings directly inside the Oracle Database using SQL. 
    08:41
    AI is transforming every industry. So, it's no wonder that AI skills are the most sought-after by employers. If you're ready to dive into AI, check out the OCI AI Foundations training and certification that's available for free! It's the perfect starting point to build your AI knowledge. Head over to mylearn.oracle.com to kickstart your AI journey today!
    09:06
    Nikita: Welcome back! Let's make this practical. Imagine I'm setting this up for the first time. What are the big steps? Can you walk us through the end-to-end workflow using Oracle AI Vector Search?
    Brent: Generate vector embeddings from your data, either outside the database or within the database. Now, embeddings are a mathematical representation of what your data means. So, what does this long sentence mean, for instance? What are the main keywords in it?
    You can also generate embeddings not only on your typical string type of data, but you can also generate embeddings on other types of data, such as pictures or perhaps maybe audio wavelengths. 
    Maybe we want to convert text strings to embeddings or convert files into text. And then from text, maybe we can chunk that up into smaller chunks and then generate embeddings on those chunks. Maybe we want to convert files to embeddings, or maybe we want to use embeddings for end-to-end search. 
    Now you have to generate vector embeddings from your unstructured data, as we mentioned, either outside or within the Oracle Database. You can either use the ONNX embedding machine learning models or you can access third-party REST APIs. 
    You can import pretrained models in ONNX format for vector generation within the database. You can download pretrained embedding machine learning models, convert them into the ONNX format if they are not already in that format. Then you can import those models into the Oracle Database and generate vector embeddings from your data within the database. 
    Oracle also allows you to convert pre-trained models to the ONNX format using Oracle Machine Learning for Python. This enables the use of text transformers from different companies. 
    11:36
    Nikita: Once those embeddings are generated, what's the next step? 
    Brent: Store vector embeddings. So you can create one or more columns of the vector data type in your standard relational data tables. You can also store those in secondary tables that are related to the primary tables using primary key foreign key relationships. 
    You store the resulting vector embeddings and the associated unstructured data alongside your relational business data in the Oracle Database. 
    12:17
    Lois: And when do vector indexes come into play? 
    Brent: Now you may want to create vector indexes in the event that you have huge vector spaces. This is an optional step, but this is beneficial for running similarity searches over those huge vector spaces. 
    12:38
    Nikita: Now, once all of that is in place, how do users perform similarity searches? 
    Brent: So once you have generated the vector embeddings and stored those vector embeddings and possibly created the vector indexes, you can then query your data with similarity searches. This allows for native SQL operations and allows you to combine similarity searches with relational searches in order to retrieve relevant data. 
    So let's take a look at the combined complete workflow. Step number one, generate the vector embeddings from your unstructured data. Step number two, store the vector embeddings. Step number three, create vector indexes. And step number four, combine similarity and keyword searches. 
    Now there is another optional step. You could generate a prompt and send it to a large language model for a full RAG inference. You can use the similarity search results to generate a prompt and send it to your generative large language model in order to complete your RAG pipeline. 
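    The four-step workflow can be mocked end to end in a few lines of Python. Everything here is hypothetical: toy_embed is a crude stand-in for a real embedding model (an ONNX model in the database or a third-party REST API), the in-memory list stands in for a table with a VECTOR column, and the optional index-creation step is skipped:

    ```python
    import math

    def toy_embed(text):
        # Hypothetical stand-in for a real embedding model: two crude
        # "features" derived from the text, purely for illustration.
        words = text.lower().split()
        avg_len = sum(len(w) for w in words) / max(len(words), 1)
        return [float(len(words)), float(avg_len)]

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Steps 1-2: generate embeddings for the unstructured data and store
    # them (an in-memory list standing in for a vector column in a table).
    corpus = ["short note", "a somewhat longer business document", "tiny"]
    store = [(text, toy_embed(text)) for text in corpus]

    # Step 4: similarity search. The closest stored item's text could then
    # seed a prompt for a large language model (the optional RAG step).
    query_vec = toy_embed("short memo")
    closest = min(store, key=lambda item: euclidean(query_vec, item[1]))[0]
    print(closest)
    ```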
    14:07
    Lois: Thanks for that detailed walk-through, Brent. To sum up, today we introduced Oracle AI Vector Search, discussed its core concepts, data types, embedding models, and the complete workflow you'll use to get real value out of your business data, securely and efficiently. 
    Nikita: If you want to learn more about the topics we discussed today, go to mylearn.oracle.com and search for the Oracle AI Vector Search Fundamentals course. And if you're feeling inspired to try this out for yourself, don't forget to check out the Oracle Database 23ai SQL Workshop for hands-on training. Until next time, this is Nikita Abraham…
    Lois: And Lois Houston, signing off!
    14:49
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

About Oracle University Podcast

Oracle University Podcast delivers convenient, foundational training on popular Oracle technologies such as Oracle Cloud Infrastructure, Java, Autonomous Database, and more to help you jump-start or advance your career in the cloud.