{"id":104634,"date":"2024-11-21T16:30:36","date_gmt":"2024-11-21T16:30:36","guid":{"rendered":"https:\/\/www.red-gate.com\/simple-talk\/?p=104634"},"modified":"2024-11-21T16:30:38","modified_gmt":"2024-11-21T16:30:38","slug":"breaking-down-ignite-a-pragmatic-look-at-the-conferences-best-innovations","status":"publish","type":"post","link":"https:\/\/www.red-gate.com\/simple-talk\/resources\/conferences\/breaking-down-ignite-a-pragmatic-look-at-the-conferences-best-innovations\/","title":{"rendered":"Breaking Down Ignite: A Pragmatic Look at the Conference\u2019s Best Innovations"},"content":{"rendered":"\n<p>The amount of news brought by Ignite is huge. I was not expecting to find so many new features, resources, and discoveries.<\/p>\n\n\n\n<p>There may be dozens of summaries of the conference, with each one focusing on different highlights and news. In this article I will summarize what seem to be the most groundbreaking changes while bringing a pragmatic view of each one of them.<\/p>\n\n\n\n<p>The new announcements are not simple features. At the end of this article I have provided some definitions and references in case you need additional background to understand the details, including some presentations I have given on AI and ML in Azure.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-sql-server-2025\">SQL Server 2025<\/h2>\n\n\n\n<p>The simple fact that SQL Server 2025 has been announced is great. Its big highlight is that it has been announced as an AI-focused database platform. 
The question from everyone: <em>What\u2019s that?<\/em><\/p>\n\n\n<div class=\"block-core-list\">\n<ul class=\"wp-block-list\">\n<li>It supports a native vector type<\/li>\n\n\n\n<li>It performs vector search<\/li>\n\n\n\n<li>It integrates with external models<\/li>\n<\/ul>\n<\/div>\n\n\n<p>There is much more news, but this alone is already incredible.<\/p>\n\n\n\n<p>Here you can see Copilot generating the code to create a model definition in SQL Server.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"838\" height=\"851\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/a-screenshot-of-a-computer-description-automatica-4.png\" alt=\"A screenshot of a computer\n\nDescription automatically generated\" class=\"wp-image-104636\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>Making an external call to generate embeddings:<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1199\" height=\"399\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/a-screenshot-of-a-computer-program-description-au.png\" alt=\"A screenshot of a computer program\n\nDescription automatically generated\" class=\"wp-image-104637\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>Making a vector distance search using the native vector type:<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"2500\" height=\"1406\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/word-image-104634-3.png\" alt=\"\" class=\"wp-image-104638\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>Another feature: a natural language query to find content in a database:<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1231\" height=\"796\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/a-screenshot-of-a-computer-description-automatica-5.png\" alt=\"A screenshot 
of a computer\n\nDescription automatically generated\" class=\"wp-image-104639\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>Note how no product description contains \u201cwork out\u201d, yet the AI model and vector search identified products that are interesting for people who like to work out.<\/p>\n\n\n\n<p>This is not <a href=\"https:\/\/en.wikipedia.org\/wiki\/Retrieval-augmented_generation\">RAG<\/a> (Retrieval-augmented generation), or at least, I would not call it RAG (there is a video discussing this topic in the references). As I explained in the session I delivered, RAG is for unstructured documents and knowledge management. This is vector search used for database queries; it\u2019s something better.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-tying-it-together\">Tying it together<\/h3>\n\n\n\n<p>This is how all these features can be connected:<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1854\" height=\"1023\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/a-diagram-of-a-software-development-description-a.png\" alt=\"A diagram of a software development\n\nDescription automatically generated\" class=\"wp-image-104640\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>Of course, there is much more news about what is coming in SQL Server 2025:<\/p>\n\n\n<div class=\"block-core-list\">\n<ul class=\"wp-block-list\">\n<li>JSON support<\/li>\n\n\n\n<li>Managed Identity support<\/li>\n\n\n\n<li>Full integration with Fabric<\/li>\n<\/ul>\n<\/div>\n\n\n<p>And much more. 
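<\/p>

<p>To make the vector distance search above concrete outside of T-SQL, here is a minimal Python sketch of the underlying idea: cosine distance over toy 3-dimensional vectors standing in for real embeddings. The product names and numbers are invented for illustration:<\/p>

```python
from math import sqrt

def cosine_distance(a, b):
    """Cosine distance: 0.0 for identical direction, approaching 2.0 for opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Toy "embeddings"; a real model returns vectors with hundreds of dimensions.
products = {
    "yoga mat": [0.9, 0.1, 0.0],
    "dumbbells": [0.8, 0.2, 0.1],
    "coffee mug": [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]  # pretend embedding of "work out"

# Rank products by distance to the query; the closest is the best match.
ranked = sorted(products, key=lambda name: cosine_distance(query, products[name]))
print(ranked)
```

<p>This is the same comparison the native vector distance search performs at scale inside the engine, with indexing and without moving the data out of the database.<\/p>

<p>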
(<a href=\"https:\/\/ignite.microsoft.com\/en-US\/sessions\/Studio09\">Here is a quick video overview from Ignite!<\/a>)<\/p>\n\n\n\n<p><strong>Links:<\/strong><\/p>\n\n\n\n<p>SQL Server 2025 Private Preview: <a href=\"https:\/\/aka.ms\/sqleapsignup\">https:\/\/aka.ms\/sqleapsignup<\/a><\/p>\n\n\n\n<p>Copilot in SSMS Private Preview: <a href=\"https:\/\/aka.ms\/ssmsinterest\">https:\/\/aka.ms\/ssmsinterest<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-fabric-databases-azure-sql\">Fabric Databases \u2013 Azure SQL<\/h2>\n\n\n\n<p>Fabric Databases is considered the biggest news in Microsoft Fabric since the platform was released one year ago.<\/p>\n\n\n\n<p>In summary, this is the capability to create an Azure SQL Database inside Microsoft Fabric. Instead of managing the database in Azure, we manage it in Microsoft Fabric.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"612\" height=\"622\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/a-screenshot-of-a-computer-description-automatica-6.png\" alt=\"A screenshot of a computer\n\nDescription automatically generated\" class=\"wp-image-104641\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>All the features of Azure SQL are available. You can use Database Projects, OLTP workloads, and more.<\/p>\n\n\n\n<p>The actual database is not in Microsoft Fabric, but in Azure. However, this is completely transparent to the end user. The difference between creating an Azure SQL database in Azure and in Microsoft Fabric is exactly this: in Azure, you need to manage the infrastructure, such as the service tier of the database. 
In Fabric, everything is done for you.<\/p>\n\n\n\n<p>The image below represents how this database is internally managed:<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"501\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/a-diagram-of-a-software-application-description-a.png\" alt=\"A diagram of a software application\n\nDescription automatically generated\" class=\"wp-image-104642\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>As you may notice in the image above, there is an automatic mirroring of the data from Azure SQL to OneLake.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-what-does-this-mean\">What does this mean?<\/h3>\n\n\n\n<p>This means objects in Fabric will be able to create shortcuts and access the data from the Azure SQL database without affecting the applications accessing the OLTP database. The applications use the data in Azure SQL, while the objects in Fabric use the data in OneLake, and this entire process is transparent to the end user.<\/p>\n\n\n\n<p>There are many pending questions, of course. 
Here are some from me:<\/p>\n\n\n<div class=\"block-core-list\">\n<ul class=\"wp-block-list\">\n<li>Is the auto-management provided by Fabric a good reason to use this feature, instead of manually managing the database in Azure and using the mirroring feature?<\/li>\n\n\n\n<li>Will Fabric use the deployment option that generates the most savings for the user?<\/li>\n<\/ul>\n<\/div>\n\n\n<p>There are already many blogs about Fabric Databases; here are some reference links:<\/p>\n\n\n<div class=\"block-core-list\">\n<ul class=\"wp-block-list\">\n<li>Olivier Van Steenlandt: <a href=\"https:\/\/community.fabric.microsoft.com\/t5\/Databases-Community-Blog\/Databases-in-Fabric-7-Quickstart-tips\/ba-p\/4287591\">https:\/\/community.fabric.microsoft.com\/t5\/Databases-Community-Blog\/Databases-in-Fabric-7-Quickstart-tips\/ba-p\/4287591<\/a><\/li>\n\n\n\n<li>Nikola Ilic: <a href=\"https:\/\/data-mozart.com\/fabric-sql-database-what-why-and-how\/\">https:\/\/data-mozart.com\/fabric-sql-database-what-why-and-how\/<\/a><\/li>\n\n\n\n<li>Kevin Chant: <a href=\"https:\/\/www.kevinrchant.com\/2024\/11\/19\/spreading-your-sql-server-wings-with-sql-database-in-fabric\/\">https:\/\/www.kevinrchant.com\/2024\/11\/19\/spreading-your-sql-server-wings-with-sql-database-in-fabric\/<\/a><\/li>\n<\/ul>\n<\/div>\n\n\n<h2 class=\"wp-block-heading\" id=\"h-azure-ai-foundry-and-agents-everywhere\">Azure AI Foundry and Agents Everywhere<\/h2>\n\n\n\n<p>Azure AI Foundry is a new central AI feature in Azure. 
Azure AI Foundry replaces Azure AI Studio, for example.<\/p>\n\n\n\n<p>The image below shows how Azure AI Foundry fits into the AI ecosystem:<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1048\" height=\"578\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/microsoft-unveils-azure-ai-foundry-for-unified-ent.jpeg\" alt=\"Microsoft Unveils Azure AI Foundry for Unified Enterprise AI Solutions -  WinBuzzer\" class=\"wp-image-104643\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>Ok, just a rebranding, right?<\/p>\n\n\n\n<p>The astonishing new feature is the Agents. We should have seen it coming; I confess I missed it.<\/p>\n\n\n\n<p>I have delivered many talks about RAG and about Machine Learning Prompt Flow, the latter capable of orchestrating multiple models, RAG or not, in order to create a copilot capable of deciding which model is best to answer the user\u2019s question.<\/p>\n\n\n\n<p><em>What\u2019s an Agent in this scenario?<\/em><\/p>\n\n\n\n<p>The best way to describe an Agent is as a no-code solution for an orchestration workflow. The Agent is basically ready to use.<\/p>\n\n\n\n<p>When you decide to create an agent, you choose what to add to it:<\/p>\n\n\n<div class=\"block-core-list\">\n<ul class=\"wp-block-list\">\n<li>You can add a RAG solution, allowing the agent to search the company documents.<\/li>\n\n\n\n<li>You can allow internet search.<\/li>\n\n\n\n<li>You can ground the internet search to specific sites.<\/li>\n\n\n\n<li>You can add Azure Functions as actions the agent will be capable of executing. For example, the user can ask the agent to book a flight ticket.<\/li>\n<\/ul>\n<\/div>\n\n\n<p>This is the ultimate orchestration flow, in a way I was not expecting. Of course, it\u2019s impossible to cover absolutely all scenarios. 
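<\/p>

<p>To picture what such an agent does behind the no-code configuration, here is a rough, purely illustrative Python sketch: a dispatcher routing each request to one of the configured capabilities. Keyword matching is a stand-in for the model-driven routing, and every tool name here is hypothetical:<\/p>

```python
def search_documents(question):
    # Stand-in for the RAG capability over company documents.
    return "[documents] " + question

def search_web(question):
    # Stand-in for the (optionally grounded) internet search capability.
    return "[web] " + question

def book_flight(question):
    # Stand-in for an Azure Function registered as an action.
    return "[action:book_flight] " + question

# Each capability is registered with trigger keywords; the agent service
# would instead let a model decide which capability fits the request.
TOOLS = [
    (("book", "flight", "ticket"), book_flight),
    (("policy", "document", "handbook"), search_documents),
]

def agent(question):
    words = question.lower().split()
    for keywords, tool in TOOLS:
        if any(keyword in words for keyword in keywords):
            return tool(question)
    return search_web(question)  # fall back to internet search

print(agent("Book a flight to Lisbon"))      # routed to the action
print(agent("What does the handbook say?"))  # routed to document search
```

<p>The real Agent replaces this hard-coded routing with model-based decisions, which is exactly why it covers most, but not all, orchestration scenarios.<\/p>

<p>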
ML Prompt Flow will still be needed, but it will be left for edge cases, no longer a main solution.<\/p>\n\n\n\n<p>The image below gives a better idea about the combination of capabilities: <br><img loading=\"lazy\" decoding=\"async\" width=\"1806\" height=\"967\" class=\"wp-image-104644\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/a-screenshot-of-a-computer-description-automatica-7.png\" alt=\"A screenshot of a computer\n\nDescription automatically generated\"><\/p>\n\n\n\n<p>The Agent created in AI Foundry will be published by the <strong>AI Agent Service<\/strong>.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1656\" height=\"870\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/a-close-up-of-a-computer-screen-description-autom.png\" alt=\"A close-up of a computer screen\n\nDescription automatically generated\" class=\"wp-image-104645\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>Apply for the Agents preview: <a href=\"https:\/\/aka.ms\/azureagents-apply\">https:\/\/aka.ms\/azureagents-apply<\/a><\/p>\n\n\n\n<p>AI App Templates Gallery: <a href=\"https:\/\/aka.ms\/aiapps\">https:\/\/aka.ms\/aiapps<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-azure-sql-news\">Azure SQL News<\/h2>\n\n\n\n<p>Among many new features, two of them caught my attention:<\/p>\n\n\n<div class=\"block-core-list\">\n<ul class=\"wp-block-list\">\n<li>Vector Support<\/li>\n\n\n\n<li>JSON native data type<\/li>\n<\/ul>\n<\/div>\n\n\n<p>Not only are the features similar to the ones expected for SQL Server 2025, but the consistency of the SQL language across different environments, from on-premises to Fabric, was highlighted in many sessions. 
The image below illustrates this.<\/p>\n\n\n\n<figure class=\"wp-block-image is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1880\" height=\"963\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/a-screenshot-of-a-computer-description-automatica-8.png\" alt=\"A screenshot of a computer\n\nDescription automatically generated\" class=\"wp-image-104646\" style=\"width:661px;height:auto\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>The only difference I could notice with the vector support is the lack of models. There is no model object in Azure SQL. Any call to a model needs to be a \u201cmanual\u201d use of the stored procedure <code>sp_invoke_external_rest_endpoint<\/code> to call an external REST API.<\/p>\n\n\n\n<p>The images below illustrate parts of this capability to reach a result similar to the one illustrated with SQL Server 2025:<\/p>\n\n\n\n<figure class=\"wp-block-image is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1142\" height=\"555\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/a-screenshot-of-a-computer-description-automatica-9.png\" alt=\"A screenshot of a computer\n\nDescription automatically generated\" class=\"wp-image-104647\" style=\"width:631px;height:auto\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>The vector distance calculation is already supported in Azure SQL:<\/p>\n\n\n\n<figure class=\"wp-block-image is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1908\" height=\"886\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/a-screenshot-of-a-computer-description-automatica-10.png\" alt=\"A screenshot of a computer\n\nDescription automatically generated\" class=\"wp-image-104648\" style=\"width:624px;height:auto\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>This is already advancing: a <strong>LangChain<\/strong> package called <code>langchain-sqlserver<\/code> has already been made available. 
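<\/p>

<p>For context, the \u201cmanual\u201d external REST call mentioned above boils down to posting a small JSON body and unpacking a JSON response. Here is a hedged Python sketch of those shapes using a mock response; the field names follow the Azure OpenAI embeddings API, and a real response carries a vector with far more dimensions:<\/p>

```python
import json

# The request body an embeddings call posts to the external endpoint.
request_body = json.dumps({"input": "Products for people who like to work out"})

# Mock of the response shape; a real response carries a long vector here.
mock_response = '{"data": [{"embedding": [0.12, -0.03, 0.55]}]}'
embedding = json.loads(mock_response)["data"][0]["embedding"]

print(json.loads(request_body)["input"])
print(embedding)
```

<p>In Azure SQL, the stored procedure handles the HTTP call itself; the JSON going in and out is what your T-SQL builds and parses.<\/p>

<p>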
You can read more about this <a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/langchain-with-sqlvectorstore\/\">new integration between LangChain and Azure SQL<\/a>.<\/p>\n\n\n\n<p>Only a week before, a video was published in the Data Exposed series explaining <a href=\"https:\/\/www.youtube.com\/watch?v=HAu2APLuj_8\">how to create an application using Semantic Kernel with Azure SQL<\/a>. I could not yet confirm whether this sample is already using the new <code>langchain-sqlserver<\/code> library, but <a href=\"https:\/\/www.youtube.com\/redirect?event=video_description&amp;redir_token=QUFFLUhqbFd0bmpkTWVpUUxfSkFaTExlMkw5a1VyVWRud3xBQ3Jtc0ttQkh0Ui05OVlhMGdfaDdxTE44U3pwMXE4ekxJZV9LemcxZE1JRUNZTV9ELThWRHBMOEs4V2hGUURaRWFidFVxT212dmhTUTRmeW8xZ3p1WkdKbGo5OVFvRkZqOHpYaldqeHVLajBoeFlMWmFWN09jYw&amp;q=https%3A%2F%2Fgithub.com%2FAzure-Samples%2Fazure-sql-db-chat-sk&amp;v=HAu2APLuj_8\">the code is available on GitHub<\/a>.<\/p>\n\n\n\n<p>Considering all the new features, a <a href=\"https:\/\/aka.ms\/sql\/dev\/path\">new applied skill was made available for everyone<\/a> to learn them.<\/p>\n\n\n\n<figure class=\"wp-block-image is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1806\" height=\"946\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/a-screenshot-of-a-computer-description-automatica-11.png\" alt=\"A screenshot of a computer\n\nDescription automatically generated\" class=\"wp-image-104649\" style=\"width:631px;height:auto\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>Native Vector Support for Azure SQL Managed Instance Private Preview: <a href=\"https:\/\/aka.ms\/azuresql-vector-eap\">https:\/\/aka.ms\/azuresql-vector-eap<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-vectors-vectors-everywhere\">Vectors, vectors everywhere<\/h2>\n\n\n\n<p>Vector support is everywhere. 
SQL Server, Azure SQL, CosmosDB, PostgreSQL, and Redis.<\/p>\n\n\n\n<p>It\u2019s difficult to answer the main question this creates:<\/p>\n\n\n\n<p><em>Why should we use one of these solutions instead of another, especially instead of AI Search, the most common vector storage at the moment?<\/em><\/p>\n\n\n\n<p>The speakers for each platform handled this question in different ways. Let\u2019s analyze it:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-sql-server\">SQL Server<\/h3>\n\n\n\n<p>They don\u2019t touch the subject directly but show it by example: the solutions are focused on database scenarios. They are not exactly vectors for a RAG solution. They could be used for that, but it may not be the main purpose.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-azure-sql\">Azure SQL<\/h3>\n\n\n\n<p>Beyond the same position as SQL Server, the examples provided highlight scenarios using Semantic Kernel without an AI model. Some of the examples seem to be using something similar to what\u2019s already being called NL2SQL: Natural Language to SQL.<\/p>\n\n\n\n<p>In other words, a different way to search a database with natural language, without even using an LLM in some cases.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-cosmosdb\">CosmosDB<\/h3>\n\n\n\n<p>The CosmosDB speaker was explicit about this: if you are handling unstructured documents which don\u2019t change too often (PDFs, for example), use AI Search. In other words, use AI Search for RAG.<\/p>\n\n\n\n<p>CosmosDB is for advanced database search using vector features. 
This is the same position as the SQL team\u2019s, but in this case it was made explicit in the session.<\/p>\n\n\n<div class=\"block-core-list\">\n<ul class=\"wp-block-list\">\n<li>DiskANN CosmosDB White Paper: <a href=\"https:\/\/aka.ms\/DiskANNCosmosDBWhitePaper\">https:\/\/aka.ms\/DiskANNCosmosDBWhitePaper<\/a><\/li>\n\n\n\n<li>RAG Solution Accelerator using CosmosDB: <a href=\"https:\/\/aka.ms\/doc2cdb\">https:\/\/aka.ms\/doc2cdb<\/a><\/li>\n\n\n\n<li>Azure CosmosDB Samples Gallery: <a href=\"https:\/\/aka.ms\/AzureCosmosDB\/Gallery\">https:\/\/aka.ms\/AzureCosmosDB\/Gallery<\/a><\/li>\n<\/ul>\n<\/div>\n\n\n<h2 class=\"wp-block-heading\" id=\"h-redis-cache\">Redis Cache<\/h2>\n\n\n\n<p>This was one of the most impressive sessions. The speaker not only justified the use of Redis over other options, but also proposed Redis as a new piece of existing architectures.<\/p>\n\n\n\n<p>Redis can act as a semantic cache. A regular solution always sends the user question to the model, but this is expensive. Instead of sending the question to the model, with Redis it\u2019s possible to make a vector search and discover whether a similar question was asked before. In that case, the same answer is provided without calling the model.<\/p>\n\n\n\n<p>This is an example of Redis Cache being used as a semantic cache for a solution with multiple models:<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1111\" height=\"610\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/a-screen-shot-of-a-computer-description-automatic.png\" alt=\"A screen shot of a computer\n\nDescription automatically generated\" class=\"wp-image-104650\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>It\u2019s not only an idea: there are specific libraries for the semantic cache implementation. 
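<\/p>

<p>To illustrate the decision a semantic cache makes, here is a toy pure-Python version. The real implementations use Redis vector search and proper embeddings, so the vectors and threshold below are invented for illustration:<\/p>

```python
from math import sqrt

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

class SemanticCache:
    """Reuse a stored answer when a sufficiently similar question was seen."""
    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.entries = []  # list of (question_embedding, answer)

    def lookup(self, embedding):
        for cached, answer in self.entries:
            if cosine_similarity(embedding, cached) >= self.threshold:
                return answer  # cache hit: the expensive model call is skipped
        return None  # cache miss: call the model, then store() the answer

    def store(self, embedding, answer):
        self.entries.append((embedding, answer))

cache = SemanticCache()
cache.store([0.9, 0.1, 0.0], "Here is a suggested workout plan.")

# A near-identical question produces a nearby embedding, so the cache answers;
# an unrelated question misses and would go to the model.
print(cache.lookup([0.88, 0.12, 0.01]))
print(cache.lookup([0.0, 0.1, 0.9]))
```

<p>The threshold is the key design choice: too low and unrelated questions get recycled answers, too high and the cache rarely hits.<\/p>

<p>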
The code below is a small example:<\/p>\n\n\n\n<figure class=\"wp-block-image is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1140\" height=\"629\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/a-screen-shot-of-a-computer-code-description-auto.png\" alt=\"A screen shot of a computer code\n\nDescription automatically generated\" class=\"wp-image-104651\" style=\"width:629px;height:auto\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>Managed Redis Preview: <a href=\"https:\/\/aka.ms\/igniteredis\">https:\/\/aka.ms\/igniteredis<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-github-models-marketplace\">GitHub Models Marketplace<\/h2>\n\n\n\n<p>Is this an announcement, or am I the last one to know about it? GitHub has an <a href=\"https:\/\/github.com\/marketplace?type=models&amp;category=multimodal\">AI model marketplace<\/a> with an interesting UI allowing you to compare different models in a very easy way.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1333\" height=\"678\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/a-screenshot-of-a-computer-description-automatica-12.png\" alt=\"A screenshot of a computer\n\nDescription automatically generated\" class=\"wp-image-104652\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>In the image below I made a comparison between two models. 
When you submit a request, the two models execute it at the same time.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1890\" height=\"877\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2024\/11\/a-screenshot-of-a-computer-screen-description-aut.png\" alt=\"A screenshot of a computer screen\n\nDescription automatically generated\" class=\"wp-image-104653\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-summary\">Summary<\/h2>\n\n\n\n<p>Of course, this is only the tip of the iceberg. There were lots of new announcements during Ignite; these are only the ones that caught my attention in a special way.<\/p>\n\n\n\n<p>You can check the <a href=\"https:\/\/news.microsoft.com\/ignite-2024-book-of-news\/\">Ignite Book of News<\/a> for more details on all the exciting stuff being announced!<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-references\">References<\/h2>\n\n\n\n<p><strong>Vector Search:<\/strong> A technique that transforms content into a mathematical vector, allowing the content to be searched by similarity. 
Used especially in AI architectures, notably <a href=\"https:\/\/en.wikipedia.org\/wiki\/Retrieval-augmented_generation\">RAG<\/a> (Retrieval-augmented generation).<\/p>\n\n\n\n<p>Link: <a href=\"https:\/\/www.youtube.com\/watch?v=TSh2m8p-kmQ&amp;list=PLNbt9tnNIlQ5pVwZFRVpoBG8uQTs8aIcz&amp;index=3&amp;t=466s\">Malta Tech Talks #5 (AI RAG architecture in Azure: What it is and what it really is)<\/a><\/p>\n\n\n\n<p><strong>RAG:<\/strong> Architecture used to allow a Large Language Model to answer queries based on indexed documents.<\/p>\n\n\n\n<p>Link: <a href=\"https:\/\/www.youtube.com\/watch?v=Wn555vx6cNs&amp;list=PLNbt9tnNIlQ5pVwZFRVpoBG8uQTs8aIcz&amp;index=8&amp;t=1s\">How to Build Your #RAG Solution with #Azure and #OpenAI by Dennes Torres &#8211; OnLine &#8211; #DataAISF<\/a><\/p>\n\n\n\n<p><strong>ML Prompt Flow:<\/strong> A Machine Learning tool in Azure to create a workflow orchestration among multiple AI solutions.<\/p>\n\n\n\n<p>Link: <a href=\"https:\/\/www.youtube.com\/watch?v=iXD1SZE8pcI&amp;list=PLNbt9tnNIlQ5pVwZFRVpoBG8uQTs8aIcz&amp;index=3\">Machine Learning Prompt Flow: The Copilot&#8217;s King by Dennes Torres &#8211; OnLine #DataAISF<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The amount of news brought by Ignite is huge. I was not expecting to find so many new features, resources, and discoveries. There may be dozens of summaries of the conference, with each one focusing on different highlights and news. 
In this article I will summarize what seem to be the most groundbreaking changes while&#8230;&hellip;<\/p>\n","protected":false},"author":50808,"featured_media":104635,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[159205,53,159107],"tags":[],"coauthors":[6810],"class_list":["post-104634","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-conferences","category-featured","category-news"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/104634","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/users\/50808"}],"replies":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/comments?post=104634"}],"version-history":[{"count":2,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/104634\/revisions"}],"predecessor-version":[{"id":104656,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/104634\/revisions\/104656"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/media\/104635"}],"wp:attachment":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/media?parent=104634"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/categories?post=104634"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/tags?post=104634"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/coauthors?post=104634"}],"curies":[{"na
me":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}