Easily fetch rich monitoring metrics for any data flow
Background
For almost any data flow, monitoring is critical to ensuring records keep moving without error. Setting up a data flow is only the first step; from there, good monitoring confirms the flow is still running and makes triaging errors much easier. Nexla's new Monitoring Connector makes it easier than ever to fetch monitoring metrics for any source, flow, or individual data product in Nexla.
Add Monitoring to Any Flow
Nexla's Monitoring Connector is now available to fetch rich monitoring metrics from any resource in Nexla. Simply point the connector at the ID of the resource, then use Nexla's own tools to transform that dataset and send it to the destination of your choice: an enterprise monitoring tool like Datadog or PagerDuty for error alerting, a real-time dashboard like Tableau or Looker for visualizing data flows, or a database table as a developer resource. The three steps below walk through the whole process.
1. Find your Resource’s ID
First, grab the ID of the resource you want to monitor. In Nexla, each source, dataset, and destination has a unique ID, and any of them can be monitored. In the example below, I'll monitor the data flowing out of a Snowflake database, so I'll use its ID, 11906.
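If you'd rather look up resource IDs programmatically, Nexla resources are also reachable over its REST API. The sketch below is illustrative only: the base URL, endpoint path, and auth header are assumptions rather than confirmed API details, so check your instance's API documentation for the exact calls.

import requests

# Hypothetical values: the base URL and endpoint path below are assumptions,
# not confirmed Nexla API details. Replace them per your instance's API docs.
NEXLA_API = "https://api.nexla.example"  # placeholder base URL
TOKEN = "YOUR_ACCESS_TOKEN"

headers = {"Authorization": f"Bearer {TOKEN}"}

# List data sources and print each ID and name to find the resource to monitor.
resp = requests.get(f"{NEXLA_API}/data_sources", headers=headers)
resp.raise_for_status()

for source in resp.json():
    print(source.get("id"), source.get("name"))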
2. Create your Monitoring Source
Create a new flow in Nexla and select the Monitoring Connector source.
Set the connector to either Flow Metrics or Active Resource Performance mode and enter the resource ID. Click Test to instantly preview the monitoring data you're fetching. Finally, set how often the connector should fetch metrics for your resource (hourly, daily, or weekly), and you're done.
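The Test preview shows the metrics records the connector will emit. As a rough illustration only, a Flow Metrics record might look something like the Python structure below; every field name here is an assumption for the sake of the example, not Nexla's actual schema, and the fields you see will depend on the mode you chose.

# Hypothetical example of what one Flow Metrics record might contain.
# All field names are assumptions for illustration, not Nexla's schema.
records = [
    {
        "resource_id": 11906,          # the resource being monitored
        "resource_type": "data_sink",  # source, dataset, or destination
        "records_processed": 152340,   # volume in the reporting window
        "errors": 12,                  # records that failed in the window
        "window_start": "2024-06-01T00:00:00Z",
        "window_end": "2024-06-02T00:00:00Z",
    },
]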
3. Send your Monitoring Data Anywhere
Now that your source has been added, transform the resulting data product as needed, or send it straight to any destination you choose.
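A common pattern is to flag only the windows that recorded errors and forward those to an alerting endpoint. This is a minimal sketch of that logic in plain Python, assuming the hypothetical record shape from step 2 and a placeholder incoming-webhook URL; inside Nexla you would express the same filtering with its own transform tools.

import requests

WEBHOOK_URL = "https://hooks.example.com/alerts"  # placeholder webhook URL

def forward_error_windows(records):
    """POST an alert for every metrics window that recorded errors."""
    for record in records:
        if record.get("errors", 0) > 0:
            message = (
                f"Resource {record['resource_id']}: {record['errors']} errored "
                f"records between {record['window_start']} and {record['window_end']}"
            )
            requests.post(WEBHOOK_URL, json={"text": message}, timeout=10)

# Using the hypothetical record shape from the step 2 sketch:
sample = [{"resource_id": 11906, "errors": 12,
           "window_start": "2024-06-01T00:00:00Z",
           "window_end": "2024-06-02T00:00:00Z"}]
forward_error_windows(sample)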
Conclusion
Nexla's new Monitoring Connector makes it easier than ever to get important flow, dataset, and destination metrics where you need them. Whether you're a developer triaging errors and debugging a flow or an analyst confirming that all of your data landed in the right place, the Monitoring Connector makes it easy to fetch those metrics in real time and check flows at a glance. For any data flow, from anywhere to anywhere, just drop in the resource ID and you'll get monitoring metrics right away.