SaaS Losers and Winners: Workday and NetSuite

“Well, I’ll be damned
Here comes your ghost again
But that’s not unusual…”

“Diamonds and Rust”, Joan Baez

I’ve said in the past that I consider it a travesty when a SaaS provider doesn’t give you direct access to the data in its native storage, such as through ODBC and OLE DB providers, and instead forces you to use file extracts or (often horrible) APIs. This greatly inhibits data integration scenarios, such as extracting data for data warehousing. I’ve written on this subject many times, including here, here, and here. Let’s look at how two popular vendors have approached data access.

SaaS Loser: Workday

Workday is a popular HR cloud platform. Unfortunately, the only way to get your data out is via its horrible APIs, as a large law firm has discovered. If you use Workday, I recommend you explore the Workday Query Language (WQL) so you don’t have to parse the API results yourself. A third-party driver from CData makes the WQL calls easier, but if you don’t want to rely on external software, try the following approach.

Workday’s REST API, particularly the /data endpoint, supports WQL queries directly, allowing custom applications to execute WQL without a third-party tool. The /data endpoint (e.g., https://wd2-impl-services1.workday.com/ccx/api/wql/v1/tenant_name/data) accepts WQL queries, including joins based on Workday’s object relationships. Once you enable WQL, you can use Azure Data Factory (ADF) or another data integration tool to send SQL-like queries:

```sql
SELECT w.Worker_ID, w.First_Name, o.Name
FROM Workers w
LEFT JOIN Organizations o ON w.Organization_ID = o.ID
WHERE w.Active = 1
```

Results are returned as JSON, which you need to flatten into a dataset. At least you don’t have to parse the horrible nested JSON payload yourself!
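To illustrate, here is a minimal Python sketch of calling the WQL endpoint and flattening the nested JSON response into tabular rows. The host, tenant, token, and the shape of the sample row are placeholders I made up for illustration, and the sketch assumes the query is sent as a URL parameter; check your tenant’s WQL API reference for the exact request format your version expects.

```python
import json
import urllib.parse
import urllib.request


def run_wql(tenant: str, token: str, wql: str) -> list[dict]:
    """Send a WQL query to the Workday /data endpoint and return the raw rows.

    The host below is the implementation-tenant host from the example above;
    substitute your own tenant's WQL service URL.
    """
    url = (
        f"https://wd2-impl-services1.workday.com/ccx/api/wql/v1/{tenant}/data"
        f"?query={urllib.parse.quote(wql)}"
    )
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]


def flatten(row: dict, prefix: str = "") -> dict:
    """Flatten nested JSON objects into dotted column names for a tabular dataset."""
    flat = {}
    for key, value in row.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value
    return flat


# Hypothetical response row, just to show what the flattening produces
sample = {"Worker_ID": "1001", "organization": {"Name": "Sales", "ID": "ORG7"}}
print(flatten(sample))
# {'Worker_ID': '1001', 'organization.Name': 'Sales', 'organization.ID': 'ORG7'}
```

A data integration tool would then land the flattened rows as a dataset; the flattening step is the same whether you do it in code or with a mapping data flow.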

SaaS Winner: NetSuite

NetSuite is a very popular ERP platform. I’m currently in the middle of a data integration project with NetSuite, which provides two interfaces:

  • REST (and SOAP) API: This is your best bet if you want to automate writes to NetSuite, such as creating orders and invoices. The REST interface also supports sending SQL queries via the SuiteQL endpoint (https://<account_id>.suitetalk.api.netsuite.com/services/rest/query/v1/suiteql). This is a good option if you want to use SQL for lookups and queries that don’t return a lot of data.
  • ODBC: Use this option to extract massive datasets, such as when loading a data warehouse. To pour some cold water on NetSuite here, not many integration tools come with native NetSuite providers, so you must resort to the NetSuite ODBC driver (this requires a self-hosted integration runtime if you use Azure Data Factory). Further, because NetSuite is an Oracle product and probably runs on an Oracle backend database, performance is suboptimal, as I’ve come to expect every time I integrate with Oracle. Still, it’s a better option for large extracts than dealing with REST APIs.
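As a sketch of the SuiteQL option, the snippet below builds the REST request: the query goes in a JSON body under "q", and NetSuite expects a "Prefer: transient" header. The account ID, query, and bearer token are placeholders; real calls typically authenticate with OAuth token-based auth, which I’ve omitted here.

```python
import json
import urllib.request


def build_suiteql_request(account_id: str, query: str, token: str) -> urllib.request.Request:
    """Build a POST request for the SuiteQL endpoint.

    The token is a placeholder -- production calls usually sign the request
    with OAuth (token-based authentication) instead of a simple bearer token.
    """
    url = f"https://{account_id}.suitetalk.api.netsuite.com/services/rest/query/v1/suiteql"
    body = json.dumps({"q": query}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Prefer": "transient",          # required by the SuiteQL endpoint
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )


req = build_suiteql_request("1234567", "SELECT id, tranid FROM transaction", "TOKEN")
print(req.full_url)
# https://1234567.suitetalk.api.netsuite.com/services/rest/query/v1/suiteql
```

Sending the request with urllib.request.urlopen(req) returns a JSON payload with an "items" array of result rows, which is manageable for lookups but, as noted above, not the right tool for bulk extraction.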


 

Prologika Newsletter Summer 2025


The May release of Power BI Desktop added a feature called Translytical Task Flows, which aims to augment Power BI reports with rudimentary writeback capabilities, such as making corrections to the data behind a report. Previously, one way to accomplish this was to integrate the report with Power Apps, as I demonstrated a while back here. My claim to fame was that Microsoft liked this demo so much that it ran for years on big monitors in the local Microsoft office!

Are translytical flows a better way to implement report writeback? I followed the steps to test this feature and here are my thoughts.

The Good

I like that translytical flows don’t require external integration or additional licensing. By contrast, the Power Apps integration required implementing an app and incurred additional licensing costs.

I like that Microsoft is getting serious about report writeback and has extended Power BI Desktop with specific features to support it, such as action buttons, new slicers, and button-triggered report refresh.

I like that you can configure the action button to refresh the report after writeback so you can see the changes immediately (assuming DirectQuery or Direct Lake semantic models). I tested the feature with a report connected to a published dataset, and it works. Of course, if the model imports data, refreshing the report won’t show the latest changes.

The Bad

Currently, you must use the new button, text, or list slicers, which can provide only a single value to the writeback function. There is no validation, except when you use a list slicer. From an end-user experience standpoint, every modifiable field would need a separate text box. This is a horrible UI! Ideally, I’d like to see Microsoft extend the Table visual to allow editing in place.

Translytical flows require a Python function to propagate the changes. Although this opens new possibilities (you can do more with custom code), what happened to the low-code, no-code mantra?

The Ugly

Currently, the data can be written to only four destinations:

  • Fabric SQL DB (Azure SQL DB provisioned in Fabric)
  • Fabric Lakehouse
  • Fabric Warehouse
  • Fabric Mirrored DB

Notice the “Fabric” prefix in all four options? Customers will probably interpret this as another attempt to force them into Fabric. Not to mention that this limitation excludes 99% of real-life scenarios where the data resides in other data sources. I surely hope Microsoft will open this feature to external data sources in the future.

So, is the Power Apps integration for writeback obsolete? Not really, because it is more flexible and provides a better user experience, at the expense of additional licensing cost.

In summary, as they stand today, translytical task flows attempt to address basic writeback needs within Fabric, such as changing a limited number of report fields or performing mass updates on a single field. They are heavily dependent on Fabric and support writeback only to Fabric data sources. Keep an eye on this feature in the hope that it will evolve over time into something more useful.

 


Teo Lachev
Prologika, LLC | Making Sense of Data

Live Monitoring of Fabric Mirroring

A current project called for mirroring a Google BigQuery dataset to Fabric. This feature is currently in private preview, so don’t try to find it. However, the tips I share here should apply to other available mirroring scenarios, such as mirroring from Azure SQL Database.

One of the GBQ tables was a transaction fact table with some 130 million rows. The issue was that the mirroring window showed this table as replicating normally with a green “Running” status, but we waited and waited and nothing happened…


First, the “Rows Replicated” counter doesn’t update continuously (it doesn’t report live mirroring info). Instead, it’s updated when the table is fully replicated. That’s because Delta table operations are atomic in nature.

So, how do we do live monitoring of the initial synchronization then? We can use Azure Storage Explorer to access the delta files of the mirrored database directly, as described here. After some experimenting (shorten the URL below to the workspace name only to see all OneLake artifacts in the workspace), I found that the itemType of a mirrored database is MountedRelationalDatabase. So, the URL to an Adventure Works mirrored database in a workspace named Fabric Tutorial would be
https://onelake.dfs.fabric.microsoft.com/Fabric Tutorial/Adventure Works.MountedRelationalDatabase/
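If you need to construct that URL programmatically (for example, to pass it to a storage client rather than paste it into Azure Storage Explorer), a small helper like the one below works; the workspace and database names are the ones from the example above, and the only assumption beyond the post is that spaces should be percent-encoded for clients that require it.

```python
from urllib.parse import quote

ONELAKE_HOST = "https://onelake.dfs.fabric.microsoft.com"


def mirrored_db_url(workspace: str, database: str) -> str:
    """Build the OneLake DFS URL for a mirrored database.

    Mirrored databases use the itemType MountedRelationalDatabase, so the
    item name gets that extension. Spaces and other special characters are
    percent-encoded so the URL also works outside Azure Storage Explorer.
    """
    return f"{ONELAKE_HOST}/{quote(workspace)}/{quote(database)}.MountedRelationalDatabase/"


print(mirrored_db_url("Fabric Tutorial", "Adventure Works"))
# https://onelake.dfs.fabric.microsoft.com/Fabric%20Tutorial/Adventure%20Works.MountedRelationalDatabase/
```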

This lets us get to the underlying delta tables and delta logs in Azure Storage Explorer, which of course update as data is replicated. Examining the underlying storage is also useful for detecting potential issues. Here is an example of the first table that was successfully replicated. Note that there is a parquet file containing the actual data.

And here is an example of the second table. As we can see, the delta log was created, but there is no parquet file. So, something went wrong, although according to Fabric all is well.
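That manual inspection can be turned into a quick scripted check. A Delta commit file in _delta_log is newline-delimited JSON, and each "add" action names a parquet data file with optional row-count stats, so parsing the log tells you both what was committed and roughly how many rows arrived. The sketch below follows the open Delta log format; the sample log is made up for illustration.

```python
import json


def replicated_files(delta_log_text: str) -> tuple[list[str], int]:
    """Parse a Delta commit log (newline-delimited JSON actions).

    Returns the parquet files the commit added and the row count from their
    stats. An empty file list is the symptom described above: the delta log
    exists, but no data file was actually committed.
    """
    files, rows = [], 0
    for line in delta_log_text.splitlines():
        action = json.loads(line)
        if "add" in action:
            add = action["add"]
            files.append(add["path"])
            stats = json.loads(add.get("stats", "{}"))
            rows += stats.get("numRecords", 0)
    return files, rows


# Hypothetical commit log with one add action
log = "\n".join([
    '{"commitInfo": {"operation": "WRITE"}}',
    '{"add": {"path": "part-00000.parquet", "size": 1048576, '
    '"stats": "{\\"numRecords\\": 130000000}"}}',
])
print(replicated_files(log))  # (['part-00000.parquet'], 130000000)
```

Pointing this at the mirrored database’s OneLake path (downloading the log files via Azure Storage Explorer or a DFS client) gives a crude but live progress check during the initial synchronization.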

To summarize, the Fabric mirroring UI leaves a lot of room for improvement. By examining the delta log and the delta files of a specific table, we can discover how the initial synchronization is actually progressing.