
The Science of Practical Data Fabric – Part 1

According to Gartner, data fabric was one of the top technology trends for 2022. Why? Data fabric can simplify an organization’s data integration infrastructure and create a scalable architecture that reduces integration challenges. Data fabric can also reduce data management efforts by up to 70%, accelerating time-to-value.


In a recent webinar, Jay Piscioneri of Eckerson Group, author of Data Fabric: The Next Step in the Evolution of Data Architectures (published January 2023), discussed this comprehensive report with Saket Saurabh, CEO of Nexla, and offered a new level of insight into practical data fabric. In this multi-part blog, we recap the key learnings from the webinar:

  • Introduction to data fabric
  • Accelerating data fabric through automation
  • Metadata intelligence
  • Auto-scaling pipelines (with a customer example)
  • Creating data products
  • Auto-generated connectors
  • The complete data lifecycle


What is data fabric?

A data fabric is an emerging data management design that captures the end-to-end integration and management of all data within a system, including sources, storage, pipelines, analytics, and applications.

The metaphorical “fabric” in a “data fabric” refers to the idea of viewing your organization’s data as a single integrated network layer versus a siloed collection of point-to-point connections.

Approaching your data as a fabric can help you create flexible, reusable, and augmented data integration pipelines that utilize knowledge graphs, semantics and active metadata-based automation. As a result, the data fabric aids in supporting faster—and in some cases, automated—data access and sharing regardless of deployment options, use cases (operational or analytical), and/or architectural approaches.

Why data fabric?

Data fabric serves as a backbone for self-service analytics. It translates business needs into system designs that deliver data throughout the enterprise to support current and future requirements. Data fabric provides a unified view of disparate and distributed data and supports any type of workload, from business intelligence to ad hoc analytics to data science. It does so by mapping data from disparate applications — within the underlying data stores, regardless of original deployment designs and locations — and makes them ready for business exploration.

How does data fabric help in practice?

In practice, a data fabric helps automate common data engineering tasks: integration, data cleansing, data ingestion, performance optimization, and monitoring. This frees data engineers to focus where their expertise and creativity add value, such as pipeline engineering rather than one-off pipeline creation. This also helps your pipelines scale (we will discuss that in our next blog).
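To make the idea of automating these tasks concrete, here is a minimal sketch of metadata-driven automation: inferring a schema from sample records, then reusing that schema to flag quality issues in new data automatically. The function names and record fields are invented for illustration and do not reflect Nexla's actual API.

```python
# Illustrative sketch only: infer a schema from sample records (active
# metadata), then reuse it to validate incoming records automatically.

def infer_schema(records):
    """Derive a {field: type-name} schema from sample records."""
    schema = {}
    for record in records:
        for field, value in record.items():
            schema.setdefault(field, type(value).__name__)
    return schema

def validate(record, schema):
    """Flag fields that are missing or whose type drifted from the schema."""
    issues = []
    for field, expected in schema.items():
        if field not in record:
            issues.append(f"missing field: {field}")
        elif type(record[field]).__name__ != expected:
            issues.append(f"type drift on {field}: expected {expected}")
    return issues

samples = [{"order_id": 1, "supplier": "acme", "qty": 50}]
schema = infer_schema(samples)
print(validate({"order_id": "2", "supplier": "acme"}, schema))
# Reports a type drift on order_id and a missing qty field.
```

The point of the sketch is that once metadata is captured, the validation step needs no hand-written rules: it is generated from the data itself, which is the kind of repetitive work a data fabric automates.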


When we look at the work of data engineers from a practical, day-to-day perspective, certain tasks are unavoidable: as a practitioner, you have to integrate and prepare data, ensure data quality, monitor pipelines, and so on. According to Saket, these tasks haven't really changed in decades, but data fabric is changing how they are done by bringing in automation. The secret sauce behind that automation is metadata intelligence (we will discuss it in our next blog).


Through automation, data fabric easily connects to multiple data systems to enable dynamic experiences from existing and newly available data points, leading to timely insights and decisions. These experiences are very different from the static experiences of reports and dashboards. For example, using a data fabric, supply chain analysts can connect supplier delays with production delays as and when these data points are available. Thus, the data fabric allows the analysts to identify developing risks and make informed decisions in real time.
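The supply chain scenario above can be sketched as an event-driven correlation: as each supplier-delay event arrives, it is immediately matched against open production orders instead of waiting for a batch report. The data structures and field names below are hypothetical, chosen only to illustrate the pattern.

```python
# Hypothetical sketch: connect supplier-delay events to production
# orders as the events arrive, surfacing developing risk in real time.

production_orders = {
    "PO-100": {"supplier": "acme", "due": "2023-03-01"},
    "PO-101": {"supplier": "globex", "due": "2023-03-05"},
}

def on_supplier_delay(event, orders):
    """Return the production orders put at risk by a delay event."""
    return [
        po_id
        for po_id, order in orders.items()
        if order["supplier"] == event["supplier"]
    ]

at_risk = on_supplier_delay(
    {"supplier": "acme", "delay_days": 3}, production_orders
)
print(at_risk)  # orders sourced from the delayed supplier
```

In a real data fabric the two inputs would live in different systems (a supplier feed and an ERP, say); the fabric's job is to make this join possible without a custom point-to-point integration.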

Introducing automation to tasks like data integration, data discovery, and data quality assurance reduces manual effort and increases data access. An automated fabric is easier to set up and maintain, more reliable, and more scalable, which lets you directly manage the growing variety and complexity of data sources and applications.

In this blog we discussed how data fabric accelerates data adoption through accurate and reliable automated integrations. It allows business users to consume data with confidence, and also enables less-skilled citizen developers to become more involved in the integration and modeling process. In the next blog, we will discuss how data fabric can scale your practice with autoscaling pipelines. You can view the recording of the webinar by the author of Data Fabric: The Next Step in the Evolution of Data Architectures by clicking here.

Unify your data operations today!

Discover how Nexla’s powerful data operations can put an end to your data challenges with our free demo.