
Oracle University Podcast

Oracle Corporation
Latest episode

165 episodes

  • Oracle University Podcast

    Encore: Cloud Data Centers - Core Concepts Part 2

    05/05/2026 | 14min
    Have you ever wondered where all your digital memories, work projects, or favorite photos actually live in the cloud?
    In this episode, Lois Houston and Nikita Abraham discuss cloud storage.
    They explore how data is carefully organized, the different ways it can be stored—whether right next to the server or across the network—and what keeps it safe and easy to find.
     
    Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Radhika Banka, and the OU Studio Team for helping us create this episode.


    ------------------------------------------------------
     
    Episode Transcript: 
     
    00:00
    Hi there! We're hitting rewind for the next few weeks and bringing back some of our most popular episodes. So, sit back and enjoy these highlights from our archive.
    00:12
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:38
    Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs.
    Lois: Hey there! Last week, we spoke about the differences between traditional and cloud data centers, and covered components like CPU, RAM, and operating systems. If you haven't listened to the episode yet, I'd suggest going back and listening to it before you dive into this one. 
    Nikita: Joining us again is Orlando Gentil, Principal OCI Instructor at Oracle University, and we're going to ask him about another fundamental concept: storage.
    01:16
    Lois: That's right, Niki. Hi Orlando! Thanks for being with us again today. You introduced cloud data centers last week, but tell us, how is data stored and accessed in these centers? 
    Orlando: At a fundamental level, storage is where your data resides persistently. Data stored on a storage device is accessed by the CPU and, for specialized tasks, the GPU. The RAM acts as a high-speed intermediary, temporarily holding data that the CPU and the GPU are actively working on. This cyclical flow ensures that applications can effectively retrieve, process, and store information, forming the backbone for our computing operations in the data center.
    02:05
    Nikita: But how is data organized and controlled on disks?
    Orlando: To effectively store and manage data on physical disks, a structured approach is required, which is defined by file systems and permissions. The process begins with disks. These are the raw physical storage devices.
    Before data can be written to them, disks are typically divided into partitions. A partition is a logical division of a physical disk that acts as if it were a separate physical disk. This allows you to organize your storage space and even install multiple operating systems on a single drive.
    Once partitions are created, they are formatted with a file system.
    02:53
    Nikita: Ok, sorry but I have to stop you there. Can you explain what a file system is? And how is data organized using a file system? 
    Orlando: The file system is the method and the data structure that an operating system uses to organize and manage files on storage devices. It dictates how data is named, stored, retrieved, and managed on the disk, essentially providing the roadmap for the data. Common file systems include NTFS for Windows and ext4 or XFS for Linux.
    Within this file system, data is organized hierarchically into directories, also known as folders. These containers help to logically group related files, which are the individual units of data, whether they are documents, images, videos, or applications. Finally, overseeing this entire organization are permissions. 
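The hierarchy described here can be sketched with Python's standard library. The directory and file names below are made up purely for illustration:

```python
import tempfile
from pathlib import Path

# Build a small hierarchy in a temporary location: directories logically
# group related files, just as on any ext4, XFS, or NTFS file system.
root = Path(tempfile.mkdtemp())
(root / "photos" / "2026").mkdir(parents=True)
(root / "documents").mkdir()
(root / "photos" / "2026" / "beach.jpg").write_bytes(b"\xff\xd8")  # placeholder bytes
(root / "documents" / "notes.txt").write_text("draft")

# Walk the tree: every file lives at a unique path inside the hierarchy.
for path in sorted(root.rglob("*")):
    print(path.relative_to(root))
```

Each file is addressed by its position in the tree, which is exactly the "roadmap" role a file system plays.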
    03:55
    Lois: And what are permissions?
    Orlando: Permissions define who can access specific files and directories and what actions they are allowed to perform-- for example, read, write, or execute.
    This access control, often managed by user, group, and other permissions, is fundamental for security, data integrity, and multi-user environments within a data center. 
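A minimal sketch of the user/group/other model using Python's standard library. The octal values shown assume a POSIX system; Windows maps these bits differently:

```python
import os
import stat
import tempfile

# Create a throwaway file to demonstrate permission bits on.
fd, path = tempfile.mkstemp()
os.close(fd)

# Owner may read and write; group and others may only read (octal 644).
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)

# S_IMODE extracts just the permission bits from the full file mode.
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o644 on POSIX systems
```

The three-digit octal form (owner, group, other) is the same access-control notation used throughout Linux servers in data centers.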
    04:21
    Lois: Ok, now that we have a good understanding of how data is organized logically, can we talk about how data is stored locally within a server?  
    Orlando: Local storage refers to storage devices directly attached to a server or computer. The three common types are hard disk drives, solid state drives, and NVMe devices. Hard disk drives are traditional storage devices using spinning platters to store data. They offer large capacity at a lower cost per gigabyte, making them suitable for bulk data storage when high performance isn't the top priority.
    Unlike hard disks, solid state drives use flash memory to store data, similar to USB drives but on a larger scale. They provide significantly faster read and write speeds, better durability, and lower power consumption than hard disks, making them ideal for operating systems, applications, and frequently accessed data.
    Non-Volatile Memory Express is a communication interface specifically designed for solid state drives that connects directly to the PCI Express bus. NVMe offers even faster performance than traditional SATA-based solid state drives by reducing latency and increasing bandwidth, making it the top choice for demanding workloads that require extreme speed, such as high-performance databases and AI applications. Each type serves different performance and cost requirements within a data center. While local storage is essential for immediate access, data centers also heavily rely on storage that isn't directly attached to a single server. 
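The cost-versus-performance trade-off among these tiers can be sketched as a simple selection rule. The cost and latency figures below are invented orders of magnitude for illustration, not vendor specifications:

```python
# Hypothetical tier characteristics (illustrative only, not real specs).
TIERS = {
    "hdd":  {"cost_per_gb": 0.02, "latency_ms": 10.0},
    "ssd":  {"cost_per_gb": 0.08, "latency_ms": 0.1},
    "nvme": {"cost_per_gb": 0.15, "latency_ms": 0.02},
}

def pick_tier(max_latency_ms: float) -> str:
    """Return the cheapest tier whose latency meets the workload's requirement."""
    candidates = [(t["cost_per_gb"], name)
                  for name, t in TIERS.items()
                  if t["latency_ms"] <= max_latency_ms]
    return min(candidates)[1]

print(pick_tier(50.0))   # bulk backups tolerate slow disks -> hdd
print(pick_tier(0.05))   # a latency-sensitive database -> nvme
```

The point of the sketch is the decision shape, not the numbers: you pay for speed only where the workload demands it.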
    06:11
    Lois: I'm guessing you're hinting at remote storage. Can you tell us more about that, Orlando?
    Orlando: Remote storage refers to data storage solutions that are not physically connected to the server or client accessing them. Instead, they are accessed over the network. This setup allows multiple clients or servers to share access to the same storage resources, centralizing data management and improving data availability. This architecture is fundamental to cloud computing, enabling vast pools of shared storage that can be dynamically provisioned to various users and applications.
    06:48
    Lois: Let's talk about the common forms of remote storage. Can you run us through them?
    Orlando: One of the most common and accessible forms of remote storage is Network Attached Storage or NAS. NAS is a dedicated file storage device connected to a network that allows multiple users and client devices to retrieve data from a centralized disk capacity. It's essentially a server dedicated to serving files.
    A client connects to the NAS over the network, and the NAS then provides access to files and folders. NAS devices are ideal for scenarios requiring shared file access, such as document collaboration, centralized backups, or serving media files, making them very popular in both home and enterprise environments. While NAS provides file-level access over a network, some applications, especially those requiring high performance and direct block-level access to storage, need a different approach. 
    07:50
    Nikita: And what might this approach be? 
    Orlando: Internet Small Computer System Interface, which provides block-level storage over an IP network.
    iSCSI, or Internet Small Computer System Interface, is a standard that allows the SCSI protocol, traditionally used for local storage, to be sent over IP networks. Essentially, it enables servers to access storage devices as if they were directly attached, even though they are located remotely on the network. 
    This means it can leverage standard Ethernet infrastructure, making it a cost-effective solution for creating high-performance, centralized storage accessible over an existing network. It's particularly useful for server virtualization and database environments where block-level access is preferred. While iSCSI provides block-level access over standard IP, for environments demanding even higher performance, lower latency, and greater dedicated throughput, a specialized network is often deployed. 
    08:59
    Nikita: And what's this specialized network called?
    Orlando: Storage Area Network, or SAN. A Storage Area Network is a high-speed network specifically designed to provide block-level access to consolidated shared storage. Unlike NAS, which provides file-level access, a SAN presents storage volumes to servers as if they were local disks, allowing for very high performance for applications like databases and virtualized environments. While iSCSI SANs use Ethernet, many high-performance SANs utilize Fibre Channel for even faster and more reliable data transfer, making them a cornerstone of enterprise data centers where performance and availability are paramount.
    09:56
    Do you want to master Oracle Database on AWS? Check out the Oracle Database@AWS course, where you'll learn provisioning, migration, security, and high availability. Validate your new skills with a certification and stand out in the multicloud space. Visit mylearn.oracle.com to learn more! 
    10:23
    Nikita: Welcome back! Orlando, are there any other popular storage paradigms we should know about?
    Orlando: Beyond file level and block level storage, cloud environments have popularized another flexible and highly scalable storage paradigm, object storage. 
    Object storage is a modern approach to storing data, treating each piece of data as a distinct, self-contained unit called an object. Unlike file systems that organize data in a hierarchy or block storage that breaks data into fixed size blocks, object storage manages data as flat, unstructured objects. Each object is stored with unique identifiers and rich metadata, making it highly scalable and flexible for massive amounts of data.
    This service handles the complexity of storage, providing access to vast repositories of data. Object storage is ideal for use cases like cloud-native applications, big data analytics, content distribution, and large-scale backups thanks to its immense scalability, durability, and cost effectiveness. While object storage is excellent for frequently accessed data in rapidly growing data sets, sometimes data needs to be retained for very long periods but is accessed infrequently. For these scenarios, a specialized low-cost storage tier, known as archive storage, comes into play.
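A toy Python sketch of the idea: a flat namespace of objects, each stored with a unique identifier and free-form metadata. This illustrates the concept only; it is not any cloud provider's API:

```python
import hashlib

class ObjectStore:
    """Flat object store: no directories, no fixed-size blocks."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes, **metadata) -> str:
        # Derive a unique identifier from the content itself.
        key = hashlib.sha256(data).hexdigest()
        self._objects[key] = {"data": data, "metadata": metadata}
        return key

    def get(self, key: str) -> bytes:
        return self._objects[key]["data"]

    def head(self, key: str) -> dict:
        # Rich metadata travels with the object, not in a directory tree.
        return self._objects[key]["metadata"]

store = ObjectStore()
key = store.put(b"holiday video", content_type="video/mp4", owner="lois")
print(store.head(key)["owner"])
```

Because the namespace is flat and keyed, such a store scales out easily: there is no hierarchy to keep consistent, only key-to-object mappings.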
    11:59
    Lois: And what's that exactly?
    Orlando: Archive storage is specifically designed for long-term backup and retention of data that you rarely, if ever, access. This includes critical information, like old records, compliance data that needs to be kept for regulatory reasons, or disaster recovery backups. The key characteristic of archive storage is an extremely low cost per gigabyte, achieved by optimizing for infrequent access rather than speed. Historically, tape backup systems were the common solution for archiving, where data from a data center is moved to tape. In modern cloud environments, this has evolved into cloud backup solutions. Cloud-based archiving leverages highly cost-effective, durable cloud storage tiers that are purpose-built for long-term retention, providing a scalable and often more reliable alternative to physical tapes.
    13:01
    Lois: Thank you, Orlando, for taking the time to talk to us about the hardware and software layers of cloud data centers. This information will surely help our listeners to make informed decisions about cloud infrastructure to meet their workload needs in terms of performance, scalability, cost, and management. 
    Nikita: That's right, Lois. And if you want to learn more about what we discussed today, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. 
    Lois: In our next episode, we'll take a look at more of the fundamental concepts within modern cloud environments, such as Hypervisors, Virtualization, and more. I can't wait to learn more about it. Until then, this is Lois Houston…
    Nikita: And Nikita Abraham, signing off!
    13:44
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
  • Oracle University Podcast

    Encore: Cloud Data Centers - Core Concepts Part 1

    28/04/2026 | 16min
    Curious about what really goes on inside a cloud data center?
     
    In this episode, Lois Houston and Nikita Abraham dive into how cloud data centers are transforming the way organizations manage technology.
    They explore the differences between traditional and cloud data centers, the roles of CPUs, GPUs, and RAM, and why operating systems and remote access matter more than ever.
     
    Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Radhika Banka, and the OU Studio Team for helping us create this episode.
     
    --------------------------------------------------------
     
    Episode Transcript:
     
    00:00
    Hi there! We're hitting rewind for the next few weeks and bringing back some of our most popular episodes. So, sit back and enjoy these highlights from our archive.
    00:12
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:37
    Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services.  
    Nikita: Hi everyone! Today, we're covering the fundamentals you need to be successful in a cloud environment. If you're new to cloud, coming from a SaaS environment, or planning to move from on-premises to the cloud, you won't want to miss this. With us today is Orlando Gentil, Principal OCI Instructor at Oracle University. Hi Orlando! Thanks for joining us.  
    01:13
    Lois: So Orlando, we know that Oracle has been a pioneer of cloud technologies and has been pivotal in shaping modern cloud data centers, which are different from traditional data centers. For our listeners who might be new to this, could you tell us what a traditional data center is? 
    Orlando: A traditional data center is a physical facility that houses an organization's mission critical IT infrastructure, including servers, storage systems, and networking equipment, all managed on site.  
    01:44
    Nikita: So why would anyone want to use a cloud data center? 
    Orlando: The traditional model requires significant upfront investment in physical hardware, which you are then responsible for maintaining along with the underlying infrastructure like physical security, HVAC, backup power, and communication links. 
    In contrast, cloud data centers offer a more agile approach. You essentially rent the infrastructure you need, paying only for what you use. In the traditional data center, scaling resources up and down can be a slow and complex process. 
    In cloud data centers, scaling is automated and elastic, allowing resources to adjust dynamically based on demand. This shift allows businesses to move their focus from the constant upkeep of infrastructure to innovation and growth. 
    The move represents a shift from maintenance to momentum, enabling optimized costs and efficient scaling. This fundamental shift in how IT infrastructure is managed and consumed is precisely what we mean by moving to the cloud. 
    02:52
    Lois: So, when we talk about moving to the cloud, what does it really mean for businesses today? 
    Orlando: Moving to the cloud represents the strategic transition from managing your own on-premise hardware and software to leveraging internet-based computing services provided by a third-party. 
    This involves migrating your applications, data, and IT operations to a cloud environment. This transition typically aims to reduce operational overhead, increase flexibility, and enhance scalability, allowing organizations to focus more on their core business functions.   
    03:29
    Nikita: Orlando, what's the "brain" behind all this technology? 
    Orlando: A CPU, or Central Processing Unit, is the primary component that performs most of the processing inside a computer or server. It performs calculations, handling the complex mathematics and logic that drive all applications and software. 
    It processes instructions, running tasks and operations in the background that are essential for any application. A CPU is critical for performance, as it directly impacts the overall speed and efficiency of the data center. 
    It also manages system activities, coordinating user input, various application tasks, and the flow of data throughout the system. Ultimately, the CPU drives data center workloads from basic server operations to powering cutting edge AI applications. 
    04:23
    Lois: To better understand how a CPU achieves these functions and processes information so efficiently, I think it's important for us to grasp its fundamental architecture. Can you briefly explain the fundamental architecture of a CPU, Orlando? 
    Orlando: When discussing CPUs, you will often hear about sockets, cores, and threads. A socket refers to the physical connection on the motherboard where a CPU chip is installed. 
    A single server motherboard can have one or more sockets, each holding a CPU. A core is an independent processing unit within a CPU. Modern CPUs often have multiple cores, enabling them to handle several instructions simultaneously, thus increasing processing power. 
    Think of it as having multiple mini CPUs on a single chip. Threads are virtual components that allow a single CPU core to handle multiple sequences of instructions, or threads, concurrently. This technology, often called hyperthreading, makes a single core appear as two logical processors to the operating system, further enhancing efficiency. 
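The arithmetic behind sockets, cores, and threads is simple. This sketch assumes a hypothetical two-socket server purely for illustration:

```python
import os

# Logical processors the OS sees = sockets x cores per socket x threads per core.
sockets = 2            # physical CPU chips on the motherboard
cores_per_socket = 16  # independent processing units per chip
threads_per_core = 2   # hyperthreading: each core appears as two logical CPUs

logical_cpus = sockets * cores_per_socket * threads_per_core
print(logical_cpus)      # 64 for this hypothetical server

print(os.cpu_count())    # how many logical CPUs this machine actually reports
```

Cloud providers typically size virtual machine shapes in these logical CPUs (often called vCPUs), so the multiplication above is worth keeping in mind when comparing instance types.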
    05:39
    Lois: Ok. And how do CPUs process commands? 
    Orlando: Beyond these internal components, CPUs are also designed based on different instruction set architectures which dictate how they process commands.  
    CPU architectures are primarily categorized into two designs-- Complex Instruction Set Computer, or CISC, and Reduced Instruction Set Computer, or RISC. CISC processors are designed to execute complex instructions in a single step, which can reduce the number of instructions needed for a task but often leads to higher power consumption. These are commonly found in traditional Intel and AMD CPUs. 
    In contrast, RISC processors use a simpler, more streamlined set of instructions. While this might require more steps for a complex task, each step is faster and more energy efficient. This architecture is prevalent in ARM-based CPUs. 
    06:47
    Are you looking to boost your expertise in enterprise AI? Check out the Oracle AI Agent Studio for Fusion Applications Developers course and professional certification, now available through Oracle University. This course helps you build, customize, and deploy AI Agents for Fusion HCM, SCM, and CX, with hands-on labs and real-world case studies. Ready to set yourself apart with in-demand skills and a professional credential? Learn more and get started today! Visit mylearn.oracle.com for more details.  
     
    07:22
    Nikita: Welcome back! We were discussing CISC and RISC processors. So Orlando, where are they typically deployed? Are there any specific computing environments and use cases where they excel? 
    Orlando: On the CISC side, you will find them powering enterprise virtualization and server workloads, such as bare metal hypervisors and large databases where complex instructions can be efficiently processed. High-performance computing that includes demanding simulations, intricate analysis, and many traditional machine learning systems. 
    Enterprise software suites and business applications like ERP, CRM, and other complex enterprise systems that benefit from fewer steps per instruction. Conversely, RISC architectures are often preferred for cloud-native workloads such as Kubernetes clusters, where simpler, faster instructions and energy efficiency are paramount for distributed computing. 
    Mobile device management and edge computing, including cell phones and IoT devices, where power efficiency and compact design are critical. Cost-optimized cloud hosting supporting distributed workloads, where the cumulative energy savings and simpler design lead to more economical operations. 
    The choice between CISC and RISC depends heavily on the specific workload and performance requirements. While CPUs are versatile generalists, handling a broad range of tasks, modern data centers also heavily rely on another crucial processing unit for specialized workloads. 
    09:07
    Lois: We've spoken a lot about CPUs, but our conversation would be incomplete without understanding what a Graphics Processing Unit is and why it's important. What can you tell us about GPUs, Orlando? 
    Orlando: A GPU or Graphics Processing Unit is distinct from a CPU. While the CPU is a generalist excelling at sequential processing and managing a wide variety of tasks, the GPU is a specialist. 
    It is designed specifically for parallel compute heavy tasks. This means it can perform many calculations simultaneously, making it incredibly efficient for workloads like rendering graphics, scientific simulations, and especially in areas like machine learning and artificial intelligence, where massive parallel computation is required. 
    In the modern data center, GPUs are increasingly vital for accelerating these specialized, data intensive workloads.  
    10:11
    Nikita: Besides the CPU and GPU, there's another key component that collaborates with these processors to facilitate efficient data access. What role does Random Access Memory play in all of this? 
    Orlando: The core function of RAM is to provide faster access to information in use. Imagine your computer or server needing to retrieve data from a long-term storage device, like a hard drive. This process can be relatively slow. 
    RAM acts as a temporary high-speed buffer. When your CPU or GPU needs data, it first checks RAM. If the data is there, it can be accessed almost instantaneously, significantly speeding up operations. 
    This rapid access to frequently used data and programming instructions is what allows applications to run smoothly and systems to respond quickly, making RAM a critical factor in overall data center performance. 
    While RAM provides quick access to active data, it's volatile, meaning data is lost when the power is off. That's unlike persistent data storage, which holds the information that needs to remain available even after a system shuts down.  
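The buffering behavior described here can be mimicked in a few lines of Python. The two dictionaries are stand-ins for the memory hierarchy, not a real one:

```python
import time

DISK = {"report.pdf": b"contents"}  # stand-in for slow persistent storage
RAM = {}                            # stand-in for fast volatile memory

def read(name: str) -> bytes:
    if name in RAM:                 # fast path: data is already in memory
        return RAM[name]
    time.sleep(0.01)                # simulate a slow trip to disk
    RAM[name] = DISK[name]          # keep a copy in RAM for next time
    return RAM[name]

read("report.pdf")                  # first read pays the "disk" cost
start = time.perf_counter()
read("report.pdf")                  # second read is served from "RAM"
print(f"cached read took {time.perf_counter() - start:.6f}s")
```

Note that clearing the `RAM` dictionary loses nothing permanent, which mirrors volatility: the authoritative copy lives on persistent storage.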
    11:26
    Nikita: Let's now talk about operating systems in cloud data centers and how they help everything run smoothly. Orlando, can you give us a quick refresher on what an operating system is, and why it is important for computing devices? 
    Orlando: At its core, an operating system, or OS, is the fundamental software that manages all the hardware and software resources on a computer. Think of it as a central nervous system that allows everything else to function. 
    It performs several critical tasks, including managing memory, deciding which programs get access to memory and when; managing processes, allocating CPU time to different tasks and applications; managing files, organizing data on storage devices; and handling input and output, facilitating communication between the computer and its peripherals, like keyboards, mice, and displays. And perhaps most importantly, it provides the user interface that allows us to interact with the computer. 
    12:31
    Lois: Can you give us a few examples of common operating systems? 
    Orlando: Common operating system examples you are likely familiar with include Microsoft Windows and MacOS for personal computers, iOS and Android for mobile devices, and various distributions of Linux, which are incredibly prevalent in servers and increasingly in cloud environments. 
    12:54
    Lois: And how are these operating systems specifically utilized within the demanding environment of cloud data centers? 
    Orlando: The two dominant operating systems in data centers are Linux and Windows. Linux is further categorized into enterprise distributions, such as Oracle Linux or SUSE Linux Enterprise Server, which offer commercial support and stability, and community distributions, like Ubuntu and CentOS, which are developed and maintained by communities and are often free to use. 
    On the other side, we have Windows, primarily represented by Windows Server, which is Microsoft's server operating system known for its robust features and integration with other Microsoft products. While both Linux and Windows are powerful operating systems, their licensing modes can differ significantly, which is a crucial factor to consider when deploying them in a data center environment. 
    13:55
    Nikita: In what way do the licensing models differ? 
    Orlando: When we talk about licensing, the differences between Linux and Windows become quite apparent. For Linux, Enterprise Distributions come with associated support fees, which can be bundled into the initial cost or priced separately. These fees provide access to professional support and updates. On the other hand, Community Distributions are typically free of charge, with some providers offering basic community-driven support. 
    Windows Server, in contrast, is a commercial product. Its license cost is generally included in the instance cost when using cloud providers or purchased directly for on-premises deployments. It's also worth noting that some cloud providers offer a bring-your-own-license, or BYOL, program, allowing organizations to use their existing Windows licenses in the cloud, which can sometimes provide cost efficiencies. 
    14:58
    Nikita: Beyond choosing an operating system, are there any other important aspects of data center management? 
    Orlando: Another critical aspect of data center management is how you remotely access and interact with your servers. Remote access is fundamental for managing servers in a data center, as you are rarely physically sitting in front of them. The two primary methods that we use are SSH, or Secure Shell, and RDP, or Remote Desktop Protocol. 
    Secure shell is widely used for secure command line access for Linux servers. It provides an encrypted connection, allowing you to execute commands, transfer files, and manage your servers securely from a remote location. The remote desktop protocol is predominantly used for graphical remote access to Windows servers. RDP allows you to see and interact with the server's desktop interface, just as if you were sitting directly in front of it, making it ideal for tasks that require a graphical user interface. 
    16:06
    Lois: Thank you so much, Orlando, for shedding light on this topic.   
    Nikita: Yeah, that's a wrap for today! To learn more about what we discussed, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. In our next episode, we'll take a close look at how data is stored and managed. Until then, this is Nikita Abraham…  
    Lois: And Lois Houston, signing off!  
    16:28
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
  • Oracle University Podcast

    Vector AI Supporting Features: What's New in Oracle Exadata and GoldenGate

    22/04/2026 | 13min
    Hosts Lois Houston and Nikita Abraham are joined by Brent Dayley, Senior Principal APEX and Apps Dev Instructor, to explore the latest vector AI supporting features in Oracle Exadata and GoldenGate 23ai. The conversation begins with an overview of Exadata's capabilities and then shifts to how GoldenGate is powering distributed AI, real-time data streaming, and analytics with advanced microservices architecture. Brent highlights recent GoldenGate enhancements, including distributed vector support, robust monitoring, OCI IAM integration, and support for next-generation AI workloads via real-time vector hubs.
     
    Oracle AI Vector Search Deep Dive: https://mylearn.oracle.com/ou/course/oracle-ai-vector-search-deep-dive/144706/
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, and the OU Studio Team for helping us create this episode.
     
    Please note, this episode was recorded before Oracle AI Database 26ai replaced Oracle Database 23ai. However, all concepts and features discussed remain fully relevant to the latest release.
     
    -------------------------------------------------------
     
    Episode Transcript:
     
    00:00
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:26
    Lois: Hello and welcome to another episode of the Oracle University Podcast! I'm Lois Houston, Director of Communications and Adoption Programs with Customer Success Services, and with me is Nikita Abraham, Team Lead of Editorial Services with Oracle University. 
    Nikita: Hi everyone! Thanks for joining us! In our previous episode of this series, we took a deep dive into Oracle AI Vector Search and Retrieval Augmented Generation, or RAG, showing how unstructured data can be transformed into embeddings to power smarter, more context-aware AI with Oracle Database 23ai.
    Lois: That's right, Niki. We also explored how the OCI Generative AI service can be used with both Python and PL/SQL, and how AI Vector Search enables relevant information retrieval for large language model prompts.
    01:21
    Nikita: Today, we're focusing on the latest supporting features for Oracle AI Vector Search. Joining us once again is Brent Dayley, Senior Principal APEX and Apps Dev Instructor. Welcome back, Brent! To kick things off, could you outline what's new in Exadata with the 24ai release, particularly for AI storage?
    Brent: So Exadata has ushered in a new era of AI capabilities with 24ai release. Key features of Exadata system software 24ai include AI Smart Scan, Exadata RDMA Memory, known as XRMEM, Exadata Smart Flash Cache, and on-storage processing. 
    Other features include In-Memory Columnar Speed JSON Queries, Transparent Cross-Tier Scans, and caching enhancements, including Columnar Smart Scan at Memory Speed, Exadata Cache Observability, and Automatic KEEP Object Load into Exadata Flash Cache. 
    Now, Exadata system software 24ai is a significant release. It ushers in a new era of AI capabilities for Oracle Database users. 
    Now there have been some infrastructure improvements, including the ability to increase the number of virtual machines on X10M and Secure Boot for KVM Virtual Machines. 
    We have also improved and enhanced high availability and network resilience, including improved RoCE Network Resilience and enhanced RoCE Network Discovery. There have been some enhancements for monitoring and management, including AWR and SQL Monitor Enhancements and JSON API for Management Server. 
    Additionally, there are security enhancements, including SNMP security. Now, Exadata system software 24ai is supported on Exadata database machines and storage expansion racks from X6 and newer. 
    03:40
    Lois: Those are some fantastic advancements for Exadata users. Now, let's pivot to distributed AI. Brent, can you walk us through how GoldenGate enables distributed AI?
    Brent: Let's take a look at some common GoldenGate use cases as a refresher. The first use case is multi-active, high availability, and cross-region deployments, spanning on-premises and cloud environments. 
    Another use case includes data offloading and data hub creation in order to support multiple downstream applications. Real-time data stores for Downstream Marts and Analytics. Micro and mini services architecture and an audit history of transactions. 
    Other use cases include migrations and upgrades of databases, including OCI-hosted databases. Another use case would be creating analytic data feeds for various applications, including SaaS and on-premises apps. And finally, stream analytics using application and transaction events captured by GoldenGate Stream Analytics. 
    05:03
    Nikita: We know GoldenGate has long been a staple for enterprise data integration. So Brent, what makes GoldenGate the best choice today, and how has its architecture evolved?
    Brent: GoldenGate remains the top choice for enterprise-standard, real-time data streaming, and it offers DIY stream analytics. It supports Oracle and third-party databases, vector sources, messaging systems, and NoSQL databases. 
    OCI offers a fully managed pipeline builder for Stream Analytics. This pipeline leverages various OCI services, such as OCI Streaming for real-time event ingestion, OCI Dataflow for stream processing, OCI Big Data for data storage and processing, and OCI Stream Analytics for real-time event processing and analysis. 
    GoldenGate microservices, available since 2017 in Oracle GoldenGate 12.3, is used in over 4,000 deployments in OCI. Benefits of GoldenGate microservices include the ability to employ the same trusted Extract and Replicat processes as the classic architecture. 
    They provide flexible and secure remote administration through a user-friendly web interface or CLI; can be deployed on-premises, in OCI as a service, and in third-party cloud environments; and simplify the patching and upgrading process. 
    Now, on the GoldenGate architecture evolution: the classic architecture was deprecated in version 19c and desupported in 23ai. The microservices architecture, introduced in version 12.3, is the recommended architecture. A migration utility is available to upgrade from the classic to the microservices architecture. 
    07:12
    Are you ready to create and manage AI Agents in Fusion Applications? Check out the Oracle AI Agent Studio for Fusion Applications courses! Start with the Foundations course to build, customize, and deploy AI Agents, and then advance to the Developer Professional certification. Explore hands-on labs and real-world case studies. Visit mylearn.oracle.com for all the details. 
    07:39
    Nikita: Welcome back! It sounds like the latest GoldenGate updates offer new features and integrations. Could you share more about these enhancements?
    Brent: There are many new features and enhancements in GoldenGate, along with microservices, including a redesigned GUI for enhanced usability. Integration with StatsD and Telegraf for monitoring and metrics. OCI IAM integration for secure access control. 
    JSON Relational Duality for flexible data handling. Next-generation AI with distributed vector support. PDB Extract Capture for efficient data extraction from Oracle Pluggable Databases. DDL notification on Target Tables for schema evolution management. 
    Support for non-Oracle and Big Data technologies. Online DDL and EBR enhancement for improved performance. Data Streams Pub-Sub for asynchronous data dissemination. Async API support for standardized event communication. High-availability clusters for increased resilience. Trail Files Management for efficient data storage. And support for new features in 23ai database. 
    It also includes integrated diagnostics for improved troubleshooting of IE and IR processes. And 30 or more OS and database certifications for wider platform support. @Dbfunction Mapping for custom data transformations. And lastly, GoldenGate free recipes for pre-built solutions and best practices. 
    New in GoldenGate, distributed AI processing with vector replication. 
    09:37
    Lois: And what type of use cases does this enable?
    Brent: Migrating vectors into Oracle Vector Database. Replicating and consolidating vector changes. Implementing multi-cloud, multi-active Oracle vector databases. Streaming text and vector changes to search engines. 
    Key considerations include that embedding models must be consistent across all vector stores for effective similarity searches. 
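    That key consideration can be made concrete with a toy check in Python. The vectors and dimensions below are purely illustrative, not output from any real model:

    ```python
    # Vectors produced by different embedding models are not comparable --
    # they often differ in dimension, and even equal-length vectors live in
    # unrelated spaces. The dimensions below are illustrative only.
    vec_from_model_a = [0.1, 0.9, 0.3]        # e.g., a 3-dimension model
    vec_from_model_b = [0.1, 0.9, 0.3, 0.5]   # a different model, 4 dimensions

    def dot(a, b):
        """Refuse to compare vectors of mismatched dimension."""
        if len(a) != len(b):
            raise ValueError("vectors come from different embedding models")
        return sum(x * y for x, y in zip(a, b))
    ```

    In practice this is why every vector store participating in replication must be populated with embeddings from the same model: a similarity score across mismatched models is meaningless even when the arithmetic happens to succeed.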
    10:09
    Lois: Now, many organizations wonder if they can use generative AI with their own business data. Brent, how do enterprises typically approach this?
    Brent: Organizations typically use generative AI in one of three ways. 
    Building LLMs from scratch, training models on proprietary data for specific tasks. Fine-tuning LLMs, adapting pre-trained models to a specific domain using private data. And prompt engineering with retrieval augmented generation, or RAG, augmenting prompts with relevant information retrieved from a knowledge base to improve the accuracy and relevance of LLM responses. 
    Now it's possible to create a real-time vector hub for GenAI. This hub can ingest real-time data from various sources, including Oracle and third-party relational databases, vector databases, third-party messaging systems, and NoSQL databases, business updates, documents, events, and alerts. 
    11:11
    Nikita: And how does the vector hub work? 
    Brent: DML and DDL changes, vector changes, and prompt or chat history are used to enrich prompts. An embedding model generates embeddings from the text data. 
    Similarity search is performed on these embeddings to retrieve relevant information from the vector hub. The retrieved information is used to augment the prompt, leading to more accurate and trustworthy answers from the LLM. Now, the benefits of real-time data and generative AI include the ability to ensure answers are based on fresh business data. And helps reduce hallucinations in generative AI responses. 
    Actionable AI and machine learning from streaming pipelines allows data from ERP and SaaS applications, databases, event messaging systems, and NoSQL databases to be ingested into streaming pipelines. This data can then be used for AI and machine learning model training, similarity searches, machine learning tasks, external AI, and machine learning integrations, alerts, and data product creation. 
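    The prompt-enrichment step Brent describes — folding retrieved chunks and prior chat turns into the prompt before it reaches the LLM — can be sketched in a few lines of Python. The function name and prompt format here are illustrative, not the vector hub's actual API:

    ```python
    def build_augmented_prompt(question, retrieved_chunks, history=None):
        """Fold retrieved context and chat history into one LLM prompt."""
        context = "\n".join(f"- {chunk}" for chunk in retrieved_chunks)
        history_text = "\n".join(history or [])
        return (
            "Answer using only the context below.\n"
            f"Context:\n{context}\n"
            f"History:\n{history_text}\n"
            f"Question: {question}"
        )

    prompt = build_augmented_prompt(
        "What is the refund window?",
        ["Refunds are issued within 30 days.", "Store credit applies after 30 days."],
        history=["User asked about shipping earlier."],
    )
    ```

    Because the context comes from a live similarity search rather than the model's training data, the LLM answers from fresh business facts — which is exactly the hallucination-reduction benefit described above.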
    12:25
    Lois: So if you had to summarize, Brent, why does GoldenGate 23ai stand out for artificial intelligence workloads?
    Brent: Well, first up, it improves data quality for AI model training and fine-tuning. And secondly, it enhances retrieval augmented generation by providing real-time access to relevant business data, leading to more accurate and trustworthy generative AI responses. 
    Nikita: Thank you, Brent, for sharing your insights and detailing these exciting new features across Oracle's AI stack. If you'd like to dive deeper into these topics, don't forget to visit mylearn.oracle.com and look for the Oracle AI Vector Search Deep Dive course. Until next time, this is Nikita Abraham…
    Lois: And Lois Houston, signing off!
    13:16
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
  • Oracle University Podcast

    RAG with Oracle AI Vector Search and OCI Generative AI: Python and PL/SQL Approaches

    14/04/2026 | 11min
    In this episode of the Oracle University Podcast, hosts Lois Houston and Nikita Abraham are joined by Brent Dayley, Senior Principal APEX & Apps Dev Instructor. Together, they explore how to implement Retrieval Augmented Generation (RAG) using Oracle AI Vector Search and OCI Generative AI. Brent walks listeners through the similarities and differences between building RAG workflows with Python and PL/SQL, offering practical insights into embedding creation, semantic search, and prompt engineering within Oracle's technology stack.
     
    Oracle AI Vector Search Deep Dive: https://mylearn.oracle.com/ou/course/oracle-ai-vector-search-deep-dive/144706/
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode.
     
    Please note, this episode was recorded before Oracle AI Database 26ai replaced Oracle Database 23ai. However, all concepts and features discussed remain fully relevant to the latest release.
     
    --------------------------------------------
     
    Episode Transcript:

    00:00
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:26
    Lois: Hello and welcome to another episode of the Oracle University Podcast! I'm Lois Houston, Director of Communications and Adoption Programs with Customer Success Services, and with me is Nikita Abraham, Team Lead for Editorial Services with Oracle University. 
    Nikita: Hi everyone! If you joined us last week, you'll remember we explored AI Vector Search and how Retrieval Augmented Generation, or RAG, empowers large language models by surfacing relevant business content for smarter, more context-aware answers.
    Lois: That's right, Niki. We also looked at how unstructured data gets transformed into embeddings, how these vectors power semantic search, and how Oracle Database 23ai is uniquely designed to support these advanced AI workflows.
    Nikita: Today, we're building on that foundation with an exciting double feature. We'll start with an introduction to OCI Generative AI Service and how you can use it with Python, and then dive into Retrieval Augmented Generation with Oracle AI Vector Search and the OCI Gen AI service using PL/SQL.
    01:32
    Lois: And to walk us through these topics, we're delighted to welcome back Brent Dayley, Senior Principal APEX & Apps Dev Instructor. Brent, it's great to have you. So, tell us, how does the OCI Generative AI service use Oracle AI Vector Search?
    Brent: So OCI Generative AI service allows us to take user questions and augment those using external data from outside of the large language model that allows us to return augmented content. 
    We would leverage Oracle AI Vector Search in order to retrieve contextually relevant information. And we would create prompts that have some sort of a meaning to help guide the user to input the appropriate types of questions. And this allows us to retrieve the data using a large language model. 
    02:27
    Nikita: What are the typical steps for implementing a RAG workflow using the OCI Generative AI service in Python?
    Brent: We would load the document. Transform the document to text. And then split the text into chunks. 
    So if you're talking about maybe a PDF that contains chapters, we might split the different chapters into individual chunks. We would then set up Oracle AI Vector Search and insert the embedding vectors. We would build the prompt to query the document. And then we would invoke the chain. 
    So first, you would load the text sources from a file. Open a terminal window and connect to your compute instance. And launch IPython to allow interactive work. 
    IPython lets you run the workflow as a series of steps, putting different commands in separate cells. You might load the source file called FAQs.
    Next, you would load the FAQ chunks into the Vector Database. You would create a connection and connect to your database. And then create the table. And then you would vectorize the text chunks and then encode the text chunks. And then insert the chunks and vectors into the database. 
    Next, you would vectorize the question. Define the SQL script ordering the results by the calculated score. Define the question. Write the retrieval code. And then execute the code. Finally, you would print the results.
    Then we would create the large language model prompt and call the generative AI LLM. Ensure that our prompt does not exceed the maximum context length of the model. And then define the prompt content. 
    We would then initialize the OCI client and then make the call. 
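    The steps Brent walks through can be miniaturized in plain Python. The bag-of-words embedding and in-memory store below are stand-ins: a real pipeline would generate embeddings with an actual model, insert them into a VECTOR column through python-oracledb, and send the final prompt to the OCI Generative AI client.

    ```python
    import math

    VOCAB = ["refund", "shipping", "warranty", "days", "free"]

    def embed(text):
        # Toy bag-of-words embedding over a fixed vocabulary; a real pipeline
        # would call a sentence-transformer or an in-database embedding model.
        words = text.lower().split()
        return [float(words.count(term)) for term in VOCAB]

    def cosine_distance(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return 1 - dot / (na * nb) if na and nb else 1.0

    # Load the FAQ chunks and "insert" text plus vectors into a store
    # (in the real flow, an Oracle table with a VECTOR column).
    chunks = ["refund within 30 days", "free shipping worldwide", "two year warranty"]
    store = [(chunk, embed(chunk)) for chunk in chunks]

    # Vectorize the question and order results by the calculated score.
    question = "how many days for a refund"
    qvec = embed(question)
    best_chunk = min(store, key=lambda row: cosine_distance(qvec, row[1]))[0]

    # Build the LLM prompt; the real call goes to the OCI Generative AI client.
    prompt = f"Context: {best_chunk}\nQuestion: {question}"
    ```

    The shape is the same as the full workflow: chunk, vectorize, store, retrieve by distance, then assemble the augmented prompt for the model call.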
    04:47
    Here's some exciting news! Oracle University has training to help your teams unlock Redwood—the next-gen design system for Fusion Cloud Applications. Learn how Redwood improves your user experience and discover how to personalize your Fusion investment using Visual Builder Studio. Whatever your role, visit mylearn.oracle.com and check out these courses today! 
    05:12
    Nikita: Thanks, Brent. That gives us a nice overview of how Python can be leveraged with OCI Generative AI. Now, how would you compare working with Python for building RAG applications to using PL/SQL? Can you walk us through the high-level process for building a RAG solution in this environment?
    Brent: First, we would want to load the document. Next, we would transform the document into plain text. After that, we would take that text and split it into meaningful chunks. Next, we would go ahead and set up Oracle AI Vector Search and insert the embedding vectors. We would then build the prompt so that we can query the document. And then we would invoke all of those previous steps as our chain. 
    06:04
    Lois: OK, and can we take a closer look at each of these steps? 
    Brent: Step 1, text extraction and preparation. So, let's imagine we have some sort of document that we want to use as the augmented information. We would load that document. Next, we would transform the document to text. And we have a function in the DBMS_VECTOR_CHAIN package called UTL_TO_TEXT. And this is used to extract plain text from the loaded documents. 
    Next, we would want to split the text into meaningful chunks. The DBMS_VECTOR_CHAIN package has another function, called UTL_TO_CHUNKS, that allows us to divide the extracted text into smaller, more manageable pieces, which we call chunks. 
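    For intuition, here is a minimal pure-Python analogue of that chunking step. Oracle's in-database chunker (UTL_TO_CHUNKS in the DBMS_VECTOR_CHAIN package) offers far richer options — splitting by words or sentences, normalization, and more — so the parameters here are illustrative only:

    ```python
    def split_into_chunks(text, max_chars=50, overlap=10):
        """Split text into overlapping character windows (illustrative only;
        the in-database chunker supports word/sentence splits and more)."""
        chunks = []
        step = max_chars - overlap
        for start in range(0, len(text), step):
            piece = text[start:start + max_chars]
            if piece.strip():
                chunks.append(piece)
            if start + max_chars >= len(text):
                break
        return chunks
    ```

    The overlap between consecutive chunks is a common trick: it keeps a sentence that straddles a boundary retrievable from at least one chunk.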
    07:02
    Nikita: Once we have our text chunks ready, what's the next step to make our data searchable and useful for the large language model?
    Brent: Step number 2, we would want to go ahead and use embedding models in order to create our vectors. We would load multiple ONNX models into the database. And the reason we would do this is because models with a greater number of dimensions usually produce higher quality vector embeddings. 
    So you might want to load multiple different ONNX models into the database so that you can generate embeddings from each of the models, and then compare those vector embeddings using those different models. You would create vector embeddings using PL/SQL packages. 
    07:55
    Lois: After embeddings are created, how does the solution find the most relevant content in response to a user's question?
    Brent: Step 3, we would then go and do a similarity search so that we can return a response. We would select the text chunks that have the relevant information for the input user question based on vector search. This allows for integrating with Oracle's Gen AI Large Language Model Service to generate responses. The process ensures that the large language model generates contextually appropriate and relevant answers for those users' queries. 
    Now, step 4 is to build the prompt, and I want to stress the importance of large language model prompt engineering. This means carefully crafting input queries or instructions so that we can get more accurate and desirable outputs from the large language model. 
    This allows developers to guide the LLM's behavior and tailor its responses to specific requirements. This is what we call LLM Prompt Engineering. And it allows us, as I was saying, to craft input queries or instructions so that we can create more accurate and desirable outputs. 
    Next, we would use an example interactive RAG application that uses the Streamlit framework in order to create a user-friendly interface. This interface will allow us to upload documents, pose the question, and receive relevant answers generated by the underlying RAG pipeline within the database. 
    In the final step, we will have an input prompt that asks us to ask a question about the PDF. We will then type in some sort of a question relative to the PDF content. And then we would retrieve the return data based on the input question. 
    10:11
    Nikita: Brent, thank you for walking us through both the Python and PL/SQL approaches for building RAG solutions with Oracle Generative AI. If you'd like to dive deeper into these topics, don't forget to visit mylearn.oracle.com and look for the Oracle AI Vector Search Deep Dive course. Until next time, this is Nikita Abraham…
    Lois: And Lois Houston, signing off!
    10:33
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
  • Oracle University Podcast

    Retrieval Augmented Generation (RAG)

    07/04/2026 | 12min
    Join hosts Lois Houston and Nikita Abraham as they explore one of the most exciting innovations in enterprise AI: Retrieval Augmented Generation (RAG) powered by Oracle AI Vector Search. In this episode, Senior Principal APEX & Apps Dev Instructor Brent Dayley walks through the fundamentals of RAG, explaining how it combines Oracle Database 23ai, vector embeddings, and large language models to deliver accurate, context-rich answers from both business and unstructured data. Discover the typical RAG workflow, practical setup steps on Oracle Cloud Infrastructure, and how to work with embedding models for real-world applications.
     
    Oracle AI Vector Search Deep Dive: https://mylearn.oracle.com/ou/course/oracle-ai-vector-search-deep-dive/144706/
    Oracle University Learning Community: https://education.oracle.com/ou-community
    LinkedIn: https://www.linkedin.com/showcase/oracle-university/
    X: https://x.com/Oracle_Edu
     
    Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode.
     
    Please note, this episode was recorded before Oracle AI Database 26ai replaced Oracle Database 23ai. However, all concepts and features discussed remain fully relevant to the latest release.
     
    ----------------------------------------------
     
    Episode Transcript

    00:00
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
    00:26
    Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and joining me is Lois Houston, Director of Communications and Adoption Programs with Customer Success Services.
    Lois: Hi everyone! If you've been with us this season, you'll know we've already covered a lot about Oracle AI Vector Search. In Episode 1, we introduced the core concepts—how vectors let you search by meaning, not just keywords, and how embedding models translate your unstructured data into a searchable format inside Oracle Database 23ai. 
    Nikita: Then, in Episode 2, we took a deeper dive into how these vectors are actually stored and managed. We explored the different types of vector indexes, similarity metrics, and best practices for designing and optimizing your database for semantic search. 
    Lois: Right. Today, we're shifting gears into one of the most exciting real-world applications: Retrieval Augmented Generation, or RAG. You'll learn how RAG combines the power of Oracle AI Vector Search with large language models to answer natural language questions using both business and unstructured data. 
    01:39
    Nikita: We'll walk through the workflow, highlight why Oracle Database is uniquely suited for RAG, and give you the essential steps to get started. Back again is Senior Principal APEX & Apps Dev Instructor Brent Dayley. Hi Brent! Could you explain what RAG is, and why it's important for working with AI and large language models?
    Brent: Well, RAG stands for Retrieval Augmented Generation. And this is a technique that allows us to enhance the capabilities of large language models, also known as LLMs, and this provides them with relevant context from external knowledge sources. This will allow the LLMs to generate more accurate, informative, and context-aware responses. Real world applications include answering questions, chatbot development, content summarization, and knowledge discovery. 
    02:35
    Lois: Brent, what makes Oracle Database 23ai a good platform for implementing RAG workflows?
    Brent: Now, there are some key advantages of using Oracle Database 23ai as a RAG platform. These include native functionality, allowing built-in tools and packages specifically designed for RAG pipeline development. 
    Also, if you are a PL/SQL developer, then this will allow you to develop within a familiar and robust database environment. Also, Oracle has a plethora of security and performance tools. And this ensures enhanced security and optimized performance. 
    03:18
    Nikita: What does a typical RAG workflow look like in Oracle Database 23ai? What are the main steps involved?
    Brent: Now, the primary workflow steps are going to be to generate vector embeddings from your unstructured data. You do this using vector embedding models. And you can generate those embeddings either inside or outside of the database. 
    Next, you need to store the vector embeddings, the unstructured data, and the relational business data, and you can store all of that in the Oracle Database. You might want to also create vector indexes that can allow you to run similarity searches over huge vector spaces with really good performance. 
    Finally, you need to query data with similarity searches. You can use Oracle AI Vector Search native SQL operations to combine similarity with relational searches to retrieve relevant data. And optionally, you can generate a prompt and send it to a large language model for full RAG inference. 
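    The "combine similarity with relational searches" step is essentially a filter plus a ranking. In SQL it would be a single query with a WHERE clause and an ORDER BY on the vector distance; the Python sketch below, with hypothetical document fields, mimics that shape:

    ```python
    import math

    def cosine_distance(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return 1 - dot / norm

    # Hypothetical rows: a relational attribute (region) plus a stored embedding.
    docs = [
        {"region": "EMEA", "text": "EMEA returns policy", "vec": [1.0, 0.0]},
        {"region": "APAC", "text": "APAC returns policy", "vec": [0.9, 0.1]},
        {"region": "EMEA", "text": "EMEA shipping rates", "vec": [0.0, 1.0]},
    ]
    qvec = [1.0, 0.05]

    # Roughly: WHERE region = 'EMEA'
    #          ORDER BY distance(vec, :qvec) FETCH FIRST 1 ROW ONLY
    hits = sorted((d for d in docs if d["region"] == "EMEA"),
                  key=lambda d: cosine_distance(qvec, d["vec"]))
    top_text = hits[0]["text"]
    ```

    Doing both the relational filter and the similarity ranking inside the one database is exactly the advantage of storing vectors next to the business data.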
    04:30
    Lois: Can you give us an example of how this workflow operates in practice?
    Brent: A user's natural language question is encoded as a vector and sent to AI Vector Search. Next, AI vector search finds private content, such as documents, that are stored in the database, and those will match the user's question. The content is then sent to Oracle's GenAI service to help answer the user's question. And then GenAI uses the content plus general knowledge to provide an informed answer back to the user. 
    05:14
    Nikita: What does the overall user experience look like when interacting with RAG? How does Oracle ensure the answers are both accurate and up to date?
    Brent: In this case, we have a chatbot. This is the interface that we usually use to enable dialogue with the large language model. Now, in order to improve the quality of the answers, we want to search your private business data, and that allows us to pass the most relevant facts back to the LLM. 
    Next, we want to format the similarity search results as a prompt and context for the large language model. Now, this will allow us to use up to date facts as input to LLMs. And that will minimize the probability of the LLM hallucinating. And those high-quality responses are then returned back to the chatbot. 
    06:12
    Lois: Brent, what does the setup process look like for getting RAG up and running with Oracle AI Vector Search on OCI? Can you take us through the main steps?
    Brent: First, you will log into OCI. Provide your cloud account name and click Next. There are also interfaces for signing in using a traditional cloud account. And if you're not an Oracle Cloud customer yet, you can also sign up using this page. 
    Next, after signing in, you will create a compute instance. And you will use the Oracle Cloud Infrastructure Console in order to do this. And you will wind up with the user called OPC. You'll notice that you're using SSH in order to connect to your compute instance, and you're running a script in order to set up the Oracle Database. 
    After that, you will set up the Python environment, again using SSH to connect as an OPC user to your compute instance. 
    07:22
    Do you want to optimize your implementation strategies? Check out the Oracle Fusion Cloud Applications Process Essentials training and certifications for insight into key processes and efficiencies across every phase of your Fusion Cloud Apps journey. Learn more at mylearn.oracle.com. 
    07:43
    Nikita: Welcome back! So far, we've seen how Oracle AI Vector Search powers RAG, letting you surface relevant business knowledge for large language models and enhance their answers. At the heart of all this is the process of transforming unstructured data, like text or documents, into mathematical representations called embeddings. 
    Lois: Those embeddings are what make meaningful, semantic search possible. But have you wondered how those embeddings actually get created, or what goes on behind the scenes when you choose an embedding model? 
    Nikita: Up next, we'll take a closer look at embedding models themselves: what they are, how to use them inside Oracle Database 23ai, and how you can experiment with different models to get the results that best fit your business needs. 
    Lois: We'll walk through importing models, generating embeddings, and even how you can swap out embedding models to compare results. But before we get into the nitty-gritty details, let's quickly recap embedding models, since we've mentioned them in our previous episodes. 
    08:47
    Nikita: Brent, for listeners who might need a refresher, can you explain what embedding models are and why they're so central to AI Vector Search? 
    Brent: AI Vector Search is based on similarity properties. You can search data by semantic similarity rather than by the actual values. Vector embeddings are created by embedding models to represent the unstructured data. So we have input data. 
    What we'll want to do is to use an embedding model to generate vector embeddings. And then the vector embeddings would be stored inside of a vector column in a table. We would then compare those vectors to each other using a vector distance function. 
    And we would get the relevant content back based on the number of returns that we describe. For instance, maybe we want to bring back the five closest pieces of data compared to the input data. 
    There is a new function, called VECTOR_EMBEDDING, that allows you to generate vector embeddings within the database. 
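    Brent's "five closest pieces of data" idea can be sketched with a toy distance function. The metric names mirror common similarity metrics, but this is an illustration, not the database's actual function:

    ```python
    import math

    def vector_distance(a, b, metric="cosine"):
        """Toy analogue of a vector distance function with two metrics."""
        if metric == "euclidean":
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return 1 - dot / norm

    # Ten stored vectors; fetch the five closest to the query, like
    # ORDER BY distance ... FETCH FIRST 5 ROWS ONLY.
    rows = [([float(i), 1.0], f"row{i}") for i in range(10)]
    query = [3.0, 1.0]
    closest = sorted(rows, key=lambda r: vector_distance(query, r[0]))[:5]
    ```

    Note that the choice of metric changes the ranking, which is one more reason the embedding model and metric must be consistent between indexing and querying.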
    10:08
    Lois: Can you walk us through the practical steps for using embedding models with Oracle AI Vector Search?
    Brent: In order to create and set up a table, we might use the Python program called create_schema.py. And that will allow us to create a table. 
    We would ensure that the table was successfully created with the data. As an example, I would create a table called MY_DATA. Next, we would use a sentence transformers embedding model in order to vectorize the table. We can use the Python program, vectorize_table_SentenceTransformers.py. We would then query the MY_DATA table in the Oracle Database to verify that the data has been updated. 
    And then we would use sentence transformers in order to perform the similarity search. The Python program is called similarity_search_SentenceTransformers.py. And what that would do is create the table and then perform a similarity search using the sentence transformers. Now what if you decide that you want to maybe change embedding models? Maybe you want to compare the results by using one particular model as compared to a different model. 
    So you can change the embedding model. And in order to do that, you would change the embedding model in both of the programs and re-vectorize the table using the vectorize_table_SentenceTransformers.py program. You would then use the new model with different words, possibly, and then compare and review the results, and then choose which one gets you back the data that you're looking for that is most similar. 
    12:02
    Nikita: Well, that's a wrap on this episode. A big thank you, Brent, for sharing your expertise with us. 
    Lois: If you want to learn more about the topics we discussed today, visit mylearn.oracle.com and search for the Oracle AI Vector Search Deep Dive course. Until next time, this is Lois Houston…
    Nikita: And Nikita Abraham, signing off!
    12:25
    That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.


About Oracle University Podcast

Oracle University Podcast delivers convenient, foundational training on popular Oracle technologies such as Oracle Cloud Infrastructure, Java, Autonomous Database, and more to help you jump-start or advance your career in the cloud.