{"id":106911,"date":"2025-07-01T04:17:00","date_gmt":"2025-07-01T04:17:00","guid":{"rendered":"https:\/\/www.red-gate.com\/simple-talk\/?p=106911"},"modified":"2026-05-08T08:57:06","modified_gmt":"2026-05-08T08:57:06","slug":"unlocking-infor-cloudsuite-data-making-the-most-of-it-with-stream-pipelines","status":"publish","type":"post","link":"https:\/\/www.red-gate.com\/simple-talk\/business-intelligence\/unlocking-infor-cloudsuite-data-making-the-most-of-it-with-stream-pipelines\/","title":{"rendered":"Unlocking Infor CloudSuite data: Making the most of it with Stream Pipelines"},"content":{"rendered":"\n<p>In my experiences building end-to-end analytics solutions, I often need to connect to a variety of different systems to bring data in. Most of the time, this process is straightforward. However, there are cases where I\u2019ve had to jump through a few hoops\u2014such as using staged databases, secured connections, or other mechanisms\u2014enforced by the organization depending on their policies.<\/p>\n\n\n\n<p>Enterprise Resource Planning (ERP) systems are some of the most common sources I connect to for analytics. They\u2019re at the heart of most business operations, spanning multiple functions, which makes them a critical source of data for any analytics effort. Several of my recent projects involved connecting to Infor\u2019s ERP systems\u2014specifically M3, and more recently, their <a href=\"https:\/\/www.infor.com\/products\/cloud-strategy\">CloudSuite<\/a> platform. Connecting to their on-premises applications (such as <a href=\"https:\/\/www.infor.com\/solutions\/erp\/m3\">M3<\/a>, <a href=\"https:\/\/www.infor.com\/solutions\/erp\/ln\">LN<\/a>, etc.) 
to extract data is usually straightforward if access to the database can be obtained, whereas connecting to CloudSuite to extract data is a little more challenging, even with the connectivity options provided by Infor, a sentiment shared by many customers that has led to the notion that Infor data is fairly closed off.<\/p>\n\n\n\n<p>Infor\u2019s Data Fabric, with its data lake and the anticipated lakehouse (at the time of writing), and Birst seem promising on paper, but they still fall short of delivering the right tools for building a true end-to-end analytics solution. On top of that, consistent issues with connecting to their own ERP for analysis, combined with a lack of clear guidance on how to get it all working, only add to the challenges.<\/p>\n\n\n\n<p>This article aims to make Infor\u2019s CloudSuite clearer and easier to work with when you need to pull data out for your data &amp; AI initiatives.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-infor-concepts\">Infor concepts<\/h2>\n\n\n\n<p>Before diving in, let\u2019s familiarize ourselves with Infor\u2019s offerings that are relevant to this article.<\/p>\n\n\n<div class=\"block-core-list\">\n<ul class=\"wp-block-list\">\n<li><strong>Infor CloudSuite<\/strong> is a collection of business applications offered as a SaaS solution\u2014including products like M3, LN, and others. It\u2019s similar to how Microsoft offers Dynamics 365 as a unified cloud-based offering for its business apps. <\/li>\n\n\n\n<li><strong>Infor OS<\/strong> (Operating Services) is the core platform that provides a single pane of glass across CloudSuite, integration, automation, security, governance, and a data fabric as a digital transformation platform. <\/li>\n\n\n\n<li><strong>Infor Data Fabric<\/strong> is the unified approach to data management\u2014bringing together data lake storage with the tools to process, explore, and serve data for reporting and analytics. 
<\/li>\n<\/ul>\n<\/div>\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-problem\">The problem<\/h2>\n\n\n\n<p>As an organization, you capture and store data to support day-to-day operations, gain visibility into what\u2019s happening across the business, and make more informed decisions. But to really unlock its value, you often need to extract that data, process it effectively, and use it beyond what the application offers, especially when it comes to analytics and strategic decision-making.<\/p>\n\n\n\n<p>I\u2019ve worked with many customers who lean heavily on data from their ERP systems, Infor or otherwise, for their daily operational insights and strategic decision making. While that is a natural starting point, the full value often emerges when data from across the business is brought together and mixed and matched effectively with ERP data. No business runs on just one system. Organizations have sales targets floating in Excel, customer data living in Dynamics 365, eCommerce humming away in Shopify, maybe even an old Oracle-based ERP still hanging on in the background. Bringing all that data together creates a more complete, connected view of the business, something no single system can provide on its own.<\/p>\n\n\n\n<p>If I were building a solid, scalable, and user-friendly analytics solution for a customer, I\u2019d need the right mix of tools and tech to make it happen. What stands out about Infor Data Fabric is that it\u2019s really geared toward enabling access to data from Infor\u2019s own ecosystem, not so much for bringing data together from external systems. 
That feels a little at odds with their broader vision of a data fabric built around a data lake, with a lakehouse on the horizon (at the time of writing).<\/p>\n\n\n\n<p>I\u2019ve also been seeing more and more customers leaning toward Power BI for their reporting and analytics needs, largely because Birst, where the data ultimately needs to go, doesn\u2019t quite deliver what they\u2019re looking for. But even then, the challenge of getting data out of Infor remains. The reality is, when you&#8217;re pulling in data from all kinds of systems\u2014on-prem, cloud, SaaS apps, databases, files, structured or not\u2014you need the right tools to extract, process, store, explore, and present that data. And if I don\u2019t have the flexibility to do all of that, in ways that fit my needs, I\u2019m going to look at a stack that does\u2014like Azure, AWS, or GCP.<\/p>\n\n\n\n<p>If unlocking the full value of your data is the goal, then choosing an open, flexible platform that plays well with all your systems isn\u2019t just a nice-to-have, it\u2019s a strategic must.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-solution\">The solution<\/h2>\n\n\n\n<p>Over the years, Infor has rolled out a few different ways to extract data from its ERP systems, but in practice, they\u2019ve often fallen short when it comes to reliability and performance. Take the Compass API, for instance. While it works for small requests, such as when you build a user interface extension that uses it to interact with M3, it doesn\u2019t hold up well when you use it to extract large datasets from the ERP. <\/p>\n\n\n\n<p>Then there\u2019s the Pentaho-based \u201cETL tool\u201d, which must first be deployed across two virtual machines and pushes data extracted from M3 to a SQL Server instance on one of the VMs, a setup that regularly encounters data mismatches against the source and often suffers from high latency. 
Even the data lake on Infor\u2019s Data Fabric, which is seen as a more modern option, has its challenges with regular latency problems and occasional inconsistencies that make it less dependable \u2013 so, more problems than a sound solution.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1385\" height=\"922\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/word-image-106911-1.png\" alt=\"\" class=\"wp-image-106912\" srcset=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/word-image-106911-1.png 1385w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/word-image-106911-1-300x200.png 300w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/word-image-106911-1-1024x682.png 1024w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/word-image-106911-1-768x511.png 768w\" sizes=\"auto, (max-width: 1385px) 100vw, 1385px\" \/><\/figure>\n\n\n\n<p><br>  <br><strong><em>Figure 1: Infor Data Fabric &#8211; data storage and processing options<\/em><\/strong><\/p>\n\n\n\n<p>As data professionals, we treat all data with the same level of importance. Each system we connect to is simply another source for the data repository we\u2019re building, whether it\u2019s an expensive ERP or a collection of files in a folder. After all, &#8220;The true value of a system lies not in its cost, but in the richness of the data it contains.&#8221;<\/p>\n\n\n\n<p>Ultimately, this means we have to treat CloudSuite like any other data source and simply ask, \u201cgive us the darn data.\u201d And it looks like Infor has finally cracked the code with Stream Pipelines.<\/p>\n\n\n\n<p>Stream Pipelines provide a way to replicate tables from within CloudSuite to outside of the Infor OS in real time. When data events happen in CloudSuite, they\u2019re immediately captured by a component called Streaming Ingestion to ingest into Data Fabric. 
Stream Pipelines kicks in at this point, processing each event in real time and pushing it directly to a database destination, bypassing the data lake. We can now connect to the database destination for reporting, analytics, and real-time data needs.<\/p>\n\n\n\n<p>Stream Pipelines is an add-on license to Data Fabric. Once you\u2019ve obtained it, configuration is easy: all that\u2019s left is to set up a data destination on one of the supported systems, which include PostgreSQL on Azure, AWS Aurora, Snowflake, and, more recently, Azure SQL Database. This is CloudSuite data replicated in real time outside of Infor\u2019s infrastructure\u2014as just another source.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1386\" height=\"889\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/word-image-106911-2.png\" alt=\"\" class=\"wp-image-106913\" srcset=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/word-image-106911-2.png 1386w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/word-image-106911-2-300x192.png 300w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/word-image-106911-2-1024x657.png 1024w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/word-image-106911-2-768x493.png 768w\" sizes=\"auto, (max-width: 1386px) 100vw, 1386px\" \/><\/figure>\n\n\n\n<p><br>  <strong><em>Figure 2: Infor Data Fabric &#8211; recommended method to surface CloudSuite data<\/em><\/strong><\/p>\n\n\n\n<p><strong>References<\/strong><\/p>\n\n\n<div class=\"block-core-list\">\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/docs.infor.com\/inforos\/2024.x\/en-us\/useradminlib_cloud\/default.html?helpcontent=datafabrug\/vku1701167022619.html\">Stream Pipelines Concepts<\/a> <\/li>\n<\/ul>\n<\/div>\n\n\n<p>Real-time ingestion also pushes data to the data lake in a queued manner\u2014ready for you to keep 
building with Infor\u2019s tools, like Birst. However, that\u2019s not what we are looking for.<\/p>\n\n\n\n<p>Once the Stream Pipelines are set up to synchronize to a data destination, I now have fully functional data and analytics tools at my disposal to build comprehensive analytics solutions with the flexibility I need, like the extensive suite of Azure data services and my new favorite, Microsoft Fabric.<\/p>\n\n\n\n<p>But first, let\u2019s look at setting up Stream Pipelines.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-setting-up-stream-pipelines\">Setting up Stream Pipelines<\/h2>\n\n\n\n<p>You set up Stream Pipelines in two parts: first you prepare the data destination, and then you model the pipelines. You can then start the pipelines and monitor them.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-preparing-the-data-destination\">Preparing the data destination<\/h3>\n\n\n\n<p>The first step is to build your destination: the database in which CloudSuite data is going to magically spawn. I chose an Azure SQL Database.<\/p>\n\n\n\n<p>You then sign into Infor OS, access Data Fabric, and navigate to the Pipelines section. 
You start by creating a destination from under Destination, which must connect to the Azure SQL Database you\u2019ve spun up.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"814\" height=\"720\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-6.png\" alt=\"A screenshot of a computer\n\nAI-generated content may be incorrect.\" class=\"wp-image-106914\" srcset=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-6.png 814w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-6-300x265.png 300w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-6-768x679.png 768w\" sizes=\"auto, (max-width: 814px) 100vw, 814px\" \/><\/figure>\n\n\n\n<p><br><strong><em>Figure 3: Creating the data destination<\/em><\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-modeling-the-pipeline\">Modeling the pipeline<\/h3>\n\n\n\n<p>From under Stream Pipelines, you start by creating a new pipeline and setting up a simple source-to-destination mapping.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"814\" height=\"720\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-7.png\" alt=\"A screenshot of a computer\n\nAI-generated content may be incorrect.\" class=\"wp-image-106915\" srcset=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-7.png 814w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-7-300x265.png 300w, 
https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-7-768x679.png 768w\" sizes=\"auto, (max-width: 814px) 100vw, 814px\" \/><\/figure>\n\n\n\n<p><br><strong><em>Figure 4: Creating a pipeline for the CloudSuite MITMAS table<\/em><\/strong><\/p>\n\n\n\n<p>The source is called a Subscription\u2014basically, you&#8217;re subscribing to a data object (table) that pushes out events. The destination is called a Delivery, where you set up a table on your Azure SQL Database to receive those events.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1209\" height=\"720\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-8.png\" alt=\"A screenshot of a computer\n\nAI-generated content may be incorrect.\" class=\"wp-image-106916\" srcset=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-8.png 1209w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-8-300x179.png 300w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-8-1024x610.png 1024w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-8-768x457.png 768w\" sizes=\"auto, (max-width: 1209px) 100vw, 1209px\" \/><\/figure>\n\n\n\n<p><br><strong><em>Figure 5: Creating the subscription for the CloudSuite MITMAS table<\/em><\/strong><\/p>\n\n\n\n<p>At this point, things can get a bit tricky, though. You\u2019ll need to have the destination table already created in your SQL database. The catch? CloudSuite gives you a JSON definition of the entity, not a ready-to-go SQL script.<\/p>\n\n\n\n<p>So, you\u2019ll need to figure out the SQL yourself. 
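If you don't want to hand-write the DDL, a small script can generate it from the JSON definition. Here's a minimal sketch; the JSON shape, type names, and the MITMAS-style columns are illustrative assumptions for this example, not Infor's actual entity format, so adjust the key names and type mapping to match what CloudSuite gives you.

```python
import json

# Hypothetical, simplified mapping from entity property types to T-SQL
# types; extend this to cover the types in your actual JSON definitions.
TYPE_MAP = {
    "string": lambda p: f"NVARCHAR({p.get('maxLength', 255)})",
    "decimal": lambda p: f"DECIMAL({p.get('precision', 18)},{p.get('scale', 4)})",
    "integer": lambda p: "BIGINT",
    "date": lambda p: "DATE",
    "timestamp": lambda p: "DATETIME2",
}

def json_to_create_table(definition: str, schema: str = "dbo") -> str:
    """Turn a JSON entity definition into a T-SQL CREATE TABLE script."""
    entity = json.loads(definition)
    cols = []
    for prop in entity["properties"]:
        sql_type = TYPE_MAP[prop["type"]](prop)
        nullability = "NOT NULL" if prop.get("key") else "NULL"
        cols.append(f"    [{prop['name']}] {sql_type} {nullability}")
    keys = [p["name"] for p in entity["properties"] if p.get("key")]
    if keys:
        cols.append("    PRIMARY KEY (" + ", ".join(f"[{k}]" for k in keys) + ")")
    return (
        f"CREATE TABLE [{schema}].[{entity['name']}] (\n"
        + ",\n".join(cols)
        + "\n);"
    )

# Example with a few MITMAS-style columns (illustrative only):
mitmas = json.dumps({
    "name": "MITMAS",
    "properties": [
        {"name": "CONO", "type": "integer", "key": True},
        {"name": "ITNO", "type": "string", "maxLength": 15, "key": True},
        {"name": "ITDS", "type": "string", "maxLength": 30},
        {"name": "GRWE", "type": "decimal", "precision": 15, "scale": 6},
    ],
})
print(json_to_create_table(mitmas))
```

Whichever route you take, the key is that the generated column names match the data object's property names exactly, since that's what the auto-mapping relies on.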
You could use the Infor ION APIs to build a small app that generates the script, or, as a colleague of mine tried out, get an AI copilot to turn the JSON into SQL. It takes a bit of effort, but you\u2019ll end up with a working table definition.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1280\" height=\"688\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-9.png\" alt=\"A screenshot of a computer\n\nAI-generated content may be incorrect.\" class=\"wp-image-106917\" srcset=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-9.png 1280w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-9-300x161.png 300w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-9-1024x550.png 1024w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-9-768x413.png 768w\" sizes=\"auto, (max-width: 1280px) 100vw, 1280px\" \/><\/figure>\n\n\n\n<p><br><strong><em>Figure 6: Creating the delivery to capture the CloudSuite MITMAS table<\/em><\/strong><\/p>\n\n\n\n<p>Once you pick the table on the delivery side, the columns will auto-map, as long as the names match exactly. You\u2019ll still want to double-check that every property in the data object is properly mapped to a column in your table. Mismatches won\u2019t surface as errors during or after the mapping exercise; you\u2019ll only discover them when you start the pipeline and find captured events being logged in the replay queue instead of being delivered to the database.<\/p>\n\n\n\n<p>Mapping also includes choosing whether you want to insert the events or upsert them. 
Inserting will land multiple versions of a record each time it gets updated, giving you a history of changes, while upserting will give you a 1:1 of the source.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-running-the-pipeline\">Running the pipeline<\/h2>\n\n\n\n<p>You run the pipeline by kicking off an initial load. You can choose between all the data on your data object, or data within a specified period as your initial data load. Kicking off includes starting the pipeline and running the initial load. Once the initial load has completed, the pipeline will continue to run picking up new events and sending them to the delivery destination.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"738\" height=\"720\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-screen-ai-generated-co.png\" alt=\"A screenshot of a computer screen\n\nAI-generated content may be incorrect.\" class=\"wp-image-106918\" srcset=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-screen-ai-generated-co.png 738w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-screen-ai-generated-co-300x293.png 300w\" sizes=\"auto, (max-width: 738px) 100vw, 738px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p><strong><em>Figure 7: Execution of the Stream Pipeline<\/em><\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-pipeline-exceptions\">Pipeline exceptions<\/h3>\n\n\n\n<p>When your stream pipelines run, there are chances of exceptions occurring. There are several reasons why they may occur. For instance, events over 4.5MB are not supported; so, if an event larger than this occurs it will be caught as an exception. If the database destination is offline, that event will be caught as a connectivity exception. 
If the JSON of the event is malformed it will cause an exception marking the event as \u2018discard only\u2019. Similarly, many different types of exceptions can occur.<\/p>\n\n\n\n<p>All exceptions are sent to the <em>replay queue<\/em>. After examining the exceptions, you can take necessary corrective actions, and execute the replay queue, which will re-attempt the events. However, some events will never go through, such as exceptions marked as \u2018discard only\u2019 including the events that are larger than 4.5MB. Others such as those with connectivity issues will go through if connectivity is re-established.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"726\" height=\"720\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-10.png\" alt=\"A screenshot of a computer\n\nAI-generated content may be incorrect.\" class=\"wp-image-106919\" srcset=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-10.png 726w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-10-300x298.png 300w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-10-150x150.png 150w\" sizes=\"auto, (max-width: 726px) 100vw, 726px\" \/><\/figure>\n\n\n\n<p><br><strong><em>Figure 8: Replay queue of the Stream Pipeline<\/em><\/strong><\/p>\n\n\n\n<p><strong>References<\/strong><\/p>\n\n\n\n<p>This link provides a list of all exception types:<\/p>\n\n\n<div class=\"block-core-list\">\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/docs.infor.com\/inforos\/2024.x\/en-us\/useradminlib_cloud\/default.html?helpcontent=datafabrug\/xkn1678806087778.html\">Error Types<\/a> <\/li>\n<\/ul>\n<\/div>\n\n\n<h2 class=\"wp-block-heading\" 
id=\"h-monitoring\">Monitoring<\/h2>\n\n\n\n<p>Once you\u2019re set up with Stream Pipelines, you\u2019ll likely need to monitor things more closely for the first few days (Figure 7). After that, occasional check-ins are usually enough. That said, monitoring tools are pretty limited; there\u2019s not much built in, and even notifications are restricted to a panel within each pipeline (Figure 8).<\/p>\n\n\n\n<p><strong>References<\/strong><\/p>\n\n\n<div class=\"block-core-list\">\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/docs.infor.com\/inforos\/2024.x\/en-us\/useradminlib_cloud\/default.html?helpcontent=datafabrug\/sed1701261173075.html\">Pipeline Monitoring Concept<\/a> <\/li>\n\n\n\n<li><a href=\"https:\/\/docs.infor.com\/inforos\/2024.x\/en-us\/useradminlib_cloud\/default.html?helpcontent=datafabrug\/dwl1678806677786.html\">Listing Failed Events<\/a> <\/li>\n<\/ul>\n<\/div>\n\n\n<p>You can also monitor the number of events consumed, and the cloud egress of events pushed out of Infor OS, through the Infor Concierge.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-pricing\">Pricing<\/h2>\n\n\n\n<p>Stream Pipelines are an add-on SKU that needs to be purchased separately from Data Fabric. Pricing tiers are based on 165,000 events per day. 
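That tier size makes sizing a quick back-of-the-envelope exercise. A sketch, assuming tiers simply stack linearly beyond the first 165,000 events (confirm the actual tier mechanics with Infor):

```python
import math

TIER_SIZE = 165_000  # events per day per pricing tier, per Infor's tiering

def tiers_needed(events_per_day: int) -> int:
    """Number of 165k-events/day tiers required to cover a daily volume."""
    return max(1, math.ceil(events_per_day / TIER_SIZE))

# e.g. a site producing roughly 400k events a day would need 3 tiers
print(tiers_needed(400_000))  # -> 3
```

Counting your busiest day's events per replicated table before you buy is worth the effort, since chatty transaction tables can blow through a tier quickly.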
Pricing is not published on Infor\u2019s documentation, meaning you need to contact Infor for this.<\/p>\n\n\n\n<p>You also need to be mindful of cloud egress, another piece that will cost you for the amount of data being pushed outside of the Infor OS.<\/p>\n\n\n\n<p><strong>References<\/strong><\/p>\n\n\n<div class=\"block-core-list\">\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/docs.infor.com\/inforosulmt\/xx\/en-us\/usagelimits\/default.html?helpcontent=kcd1720800867249.html\">Libraries \\ Additional Services<\/a> <\/li>\n\n\n\n<li><a href=\"https:\/\/www.red-gate.com\/simple-talk\/databases\/sql-server\/database-administration-sql-server\/storage-101-cloud-storage\/\" target=\"_blank\" rel=\"noreferrer noopener\">Cloud Egress<\/a><\/li>\n<\/ul>\n<\/div>\n\n\n<h2 class=\"wp-block-heading\" id=\"h-application-areas\">Application areas<\/h2>\n\n\n\n<p>Now that you\u2019ve got your data synced out from CloudSuite, let\u2019s look at what you can do with it across the business. It\u2019s a whole new world!<\/p>\n\n\n\n<p>Remember, your CloudSuite destination is now your source.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-data-landing\">Data landing<\/h3>\n\n\n\n<p>A good place to start is by landing the raw data in a centralized store\u2014ideally a data lake. Platforms like Microsoft Fabric make this even smoother by letting you push the data straight into a data lakehouse, where it shows up as tables instead of just files. On platforms like Databricks, you push the data first into a data lake storage as files and then structure them into tables. No matter which method you use, once your CloudSuite data is in one place, especially alongside other business data, you can start bringing things together for a whole range of use cases. 
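As a rough sketch of that landing step, here is what a minimal bronze-layer extract might look like. To keep it self-contained, sqlite3 stands in for the Stream Pipelines destination database and CSV stands in for Parquet/Delta; in Microsoft Fabric or Databricks you would do the equivalent in a notebook against the real destination. The table and folder names are assumptions for illustration.

```python
import csv
import sqlite3
from datetime import datetime, timezone
from pathlib import Path

def land_table(conn: sqlite3.Connection, table: str, lake_root: Path) -> Path:
    """Extract a replicated CloudSuite table and land it raw in the
    bronze layer of a medallion-style lake folder structure."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    target = lake_root / "bronze" / table.lower() / f"{stamp}.csv"
    target.parent.mkdir(parents=True, exist_ok=True)
    cur = conn.execute(f"SELECT * FROM {table}")  # full snapshot, no filtering
    with target.open("w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow([col[0] for col in cur.description])  # header row
        writer.writerows(cur)  # remaining rows straight to disk
    return target
```

From there, the silver and gold layers refine what bronze has landed, which is exactly the standardization and curation step described above.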
And by applying standardization and curation, you can take it a step further, making the data more refined, consistent, and ready for those use cases to deliver real value.<\/p>\n\n\n\n<p>You can use standard extract, load, and transform (ELT) approaches to land and shape the data\u2014typically through data pipelines and Python notebooks in Microsoft Fabric or Databricks, if we stick with that example.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"603\" height=\"440\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/word-image-106911-9.png\" alt=\"\" class=\"wp-image-106920\" srcset=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/word-image-106911-9.png 603w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/word-image-106911-9-300x219.png 300w\" sizes=\"auto, (max-width: 603px) 100vw, 603px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p><strong><em>Figure 9: Example of data landing of CloudSuite data on Microsoft Fabric<\/em><\/strong><\/p>\n\n\n\n<p>You can even mirror the source, so the data shows up inside Fabric in near real-time within a mirrored database. This is a great option if this is your sole data source, or if you don\u2019t have an elaborate strategy or path for analytics in your organization just yet.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-analytics-amp-business-intelligence\">Analytics &amp; Business intelligence<\/h3>\n\n\n\n<p>The next logical step is to bring one of those use cases to life\u2014and what better place to start than with analytics?<\/p>\n\n\n\n<p>When you landed the data, you weren\u2019t just storing it; you were already starting to curate it. A common approach here is the medallion architecture, where data flows through multiple layers of curation and transformation. Some of that work happens right at the landing stage, but the final stretch is all about shaping the data for analytics. 
This is where you aggregate, structure, and organize it in a way that supports meaningful reporting and strategic decision making.<\/p>\n\n\n\n<p>You can take it a step further by layering in business logic, adding calculations and models that make the data more intuitive and ready for business users to explore through BI tools, such as dashboards and analytical reports.<\/p>\n\n\n\n<p>To build this out in Microsoft Fabric, you can use a data lakehouse for both storage and analytical processing or go with an equally effective option like a data warehouse. On top of that, you can layer a semantic model to handle your business logic, making the data more structured and intuitive for analysis and reporting. And once your semantic model is in place, it\u2019s ready to power your reporting layer, whether it\u2019s in Power BI, Excel or any other tool your teams use.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1211\" height=\"504\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-11.png\" alt=\"A screenshot of a computer\n\nAI-generated content may be incorrect.\" class=\"wp-image-106921\" srcset=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-11.png 1211w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-11-300x125.png 300w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-11-1024x426.png 1024w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-11-768x320.png 768w\" sizes=\"auto, (max-width: 1211px) 100vw, 1211px\" \/><\/figure>\n\n\n\n<p><br> <strong><em>Figure 10: Example of analytics and business intelligence off CloudSuite data on Microsoft 
Fabric<\/em><\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-transactional-reports\">Transactional reports<\/h3>\n\n\n\n<p>Transactional reports, sometimes referred to as operational reports, focus on individual business events and typically cover a fixed, recent period rather than historical trends. They\u2019re often straightforward to build. In Microsoft Fabric, you can use paginated reports to create pixel-perfect layouts, which are especially useful when print-ready formats are needed. You can build these reports directly off the source database or keep everything within Fabric by using the mirrored database. And depending on your workload and organizational requirements, you can easily scale the setup or configure high availability to ensure smooth operations without disruptions.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"571\" height=\"504\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-diagram-of-a-software-application-ai-generated.png\" alt=\"A diagram of a software application\n\nAI-generated content may be incorrect.\" class=\"wp-image-106922\" srcset=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-diagram-of-a-software-application-ai-generated.png 571w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-diagram-of-a-software-application-ai-generated-300x265.png 300w\" sizes=\"auto, (max-width: 571px) 100vw, 571px\" \/><\/figure>\n\n\n\n<p><br> <strong><em>Figure 11: Example of transactional reports off CloudSuite data on Microsoft Fabric<\/em><\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-real-time-intelligence\">Real-time intelligence<\/h3>\n\n\n\n<p>While operational reports give you detailed, structured views of live data, real-time dashboards take it a step further, offering dynamic insights that update continuously as new data flows in. 
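The real-time setup described next relies on change data capture (CDC) being enabled on the source Azure SQL database. A minimal helper that emits the standard T-SQL for that step; the schema and table names are placeholders, and this is a sketch rather than the full setup (capture-instance options, retention, and permissions are omitted):

```python
def cdc_enable_script(schema: str, table: str) -> str:
    """Generate the T-SQL to enable change data capture (CDC) on a
    replicated CloudSuite table so downstream consumers can pick up
    changes as they happen."""
    return "\n".join([
        "-- Enable CDC at the database level (run once per database)",
        "EXEC sys.sp_cdc_enable_db;",
        "GO",
        f"-- Enable CDC for the {schema}.{table} table",
        "EXEC sys.sp_cdc_enable_table",
        f"    @source_schema = N'{schema}',",
        f"    @source_name   = N'{table}',",
        "    @role_name     = NULL;  -- no gating role in this sketch",
        "GO",
    ])

print(cdc_enable_script("dbo", "MITMAS"))
```

With CDC in place, the change feed becomes the "now" that these dashboards are built on.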
Unlike traditional business intelligence dashboards, which typically work off large historical datasets, these dashboards are all about the \u201cnow.\u201d<\/p>\n\n\n\n<p>To make this work, the source database must be configured with change data capture (CDC) so that updates are pushed out as they happen. These changes can then be picked up by Eventstreams in Microsoft Fabric, routed to an Eventhouse (a streaming equivalent to a lakehouse), and finally surfaced through to a real-time dashboard.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"691\" height=\"504\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-12.png\" alt=\"A screenshot of a computer\n\nAI-generated content may be incorrect.\" class=\"wp-image-106923\" srcset=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-12.png 691w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-12-300x219.png 300w\" sizes=\"auto, (max-width: 691px) 100vw, 691px\" \/><\/figure>\n\n\n\n<p><br><em><strong>Figure 12: Example of real-time intelligence off CloudSuite data using Microsoft Fabric<\/strong><\/em><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-advanced-analytics\">Advanced analytics<\/h3>\n\n\n\n<p>While the aggregated data in the analytics layer we talked about earlier supports descriptive and, to some extent, diagnostic analytics, you can take things further by tapping into the data stored in your lakehouse to uncover predictive and prescriptive insights. 
You can then use these insights to enrich your aggregated datasets and business calculations, making your strategic decision-making even sharper.<\/p>\n\n\n\n<p>Using Spark notebooks in Microsoft Fabric, you can build and run machine learning models directly on your lakehouse data to generate those predictive outcomes.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1411\" height=\"504\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-13.png\" alt=\"A screenshot of a computer\n\nAI-generated content may be incorrect.\" class=\"wp-image-106924\" srcset=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-13.png 1411w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-13-300x107.png 300w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-13-1024x366.png 1024w, https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2025\/05\/a-screenshot-of-a-computer-ai-generated-content-m-13-768x274.png 768w\" sizes=\"auto, (max-width: 1411px) 100vw, 1411px\" \/><\/figure>\n\n\n\n<p><br> <strong><em>Figure 13: Example of incorporating advanced analytics into a Microsoft Fabric solution using CloudSuite data<\/em><\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-artificial-intelligence\">Artificial intelligence<\/h3>\n\n\n\n<p>And how can we forget AI? It fits right into everything we\u2019ve talked about so far. Whether it\u2019s helping out with data engineering tasks, generating dashboards, or even guiding your analysis, it\u2019s everywhere. In Microsoft Fabric, you\u2019ll find a copilot baked into almost every area: Power BI, data warehouses, Data Factory, real-time intelligence, data engineering, and data science. 
And this is just the beginning; you can only imagine it\u2019s going to get better from here.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-closing-remarks\">Closing remarks<\/h2>\n\n\n\n<p>From landing your data to transforming it for analytics, operational reporting, and real-time dashboards, we\u2019ve looked at the different ways you can start putting your CloudSuite data (and data from other systems) to real use. By centralizing and curating your data, layering on business models, and even adding predictive capabilities, you&#8217;re laying the groundwork for insight-driven decisions across the board.<\/p>\n\n\n\n<p>But remember, it all starts with getting that data out of CloudSuite, and for that, Stream Pipelines is your best option right now, with even Infor recommending it. While Infor\u2019s Data Fabric might seem like a logical pick, especially paired with Birst, I\u2019ve seen more and more customers turning to external tools like Power BI and the Microsoft analytics stack. As we saw throughout this article, it\u2019s a capable setup, with a wide range of options, only really limited by how far you want to take it.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In my experiences building end-to-end analytics solutions, I often need to connect to a variety of different systems to bring data in. Most of the time, this process is straightforward. 
However, there are cases where I\u2019ve had to jump through a few hoops\u2014such as using staged databases, secured connections, or other mechanisms\u2014enforced by the organization&#8230;&hellip;<\/p>\n","protected":false},"author":76963,"featured_media":106926,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[159160,53],"tags":[159325,159324],"coauthors":[159135],"class_list":["post-106911","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-business-intelligence","category-featured","tag-infor-cloudsuite","tag-infor-cloudsuite-stream-pipelines"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/106911","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/users\/76963"}],"replies":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/comments?post=106911"}],"version-history":[{"count":7,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/106911\/revisions"}],"predecessor-version":[{"id":110359,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/106911\/revisions\/110359"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/media\/106926"}],"wp:attachment":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/media?parent=106911"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/categories?post=106911"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\
/tags?post=106911"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/coauthors?post=106911"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}