We now support Azure Synapse SQL. You can use Azure Synapse SQL as a source and target connection in data migration, incremental data migration, and persistent staging tasks. This connector also lets you select the ‘Copy’ function of Azure Synapse; if selected, you can choose your Azure Blob storage connection. The copy function is available in data migration and persistent staging tasks, but not in incremental data migrations.
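For context, Azure Synapse’s COPY statement bulk loads data from Blob storage; the sketch below is illustrative only, with a hypothetical table name, storage path, and credential.

```sql
-- Sketch of a Synapse COPY load from Azure Blob storage.
-- The table name, storage path, and credential are placeholders.
COPY INTO dbo.StagingSales
FROM 'https://myaccount.blob.core.windows.net/staging/sales/*.csv'
WITH (
    FILE_TYPE = 'CSV',
    CREDENTIAL = (IDENTITY = 'Storage Account Key', SECRET = '<storage-account-key>'),
    FIRSTROW = 2  -- skip the header row
);
```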
We have also added support for Azure Synapse Spark Pools. You can use an Azure Synapse Spark Pool to run Spark SQL and SQL scripts against Azure Synapse SQL.
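For example, a Spark SQL script run on a Spark pool might look like the following; the database and table names are illustrative only.

```sql
-- Illustrative Spark SQL statement run on a Synapse Spark pool;
-- the database and table names are placeholders.
SELECT ProductId,
       SUM(Quantity) AS TotalQuantity
FROM sales.orders
GROUP BY ProductId
ORDER BY TotalQuantity DESC;
```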
Due to the changes above, records loaded after you update the agent may appear before existing rows in your persistent staging output.
This scenario can occur when the following conditions are met:
If you meet these conditions, you should do either of the following when you update your Loome Agent:
We have added a warning notification when you select a currency column, to help ensure that there are no irregularities in the data of the selected column. To keep the data in this column consistent, we recommend using the ‘LoadDateTime’ column.
You can now add new columns to your persistent staging tasks. You can also change key columns and which columns are included in persistent staging. Depending on your selected connection, you can also change a column’s definition, and persistent staging will modify the column when the task runs.
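As a rough sketch, a column definition change on a SQL Server target corresponds to an ALTER statement like the one below; the table and column names are hypothetical.

```sql
-- Hypothetical example: widening a column on a persistent staging output table.
ALTER TABLE dbo.Customer_PS
ALTER COLUMN Email NVARCHAR(320) NULL;
```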
We have added an ‘LM_IsManual’ column to persistent staging output tables. When an external application, such as Loome Monitor, sets this value to true, the record will not be marked as deleted.
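For example, an external application could flag a record it inserted so the next persistent staging run leaves it in place; the table and key values below are hypothetical.

```sql
-- Hypothetical example: flag a manually inserted record so the next
-- persistent staging run does not mark it as deleted.
UPDATE dbo.Customer_PS
SET LM_IsManual = 1
WHERE CustomerKey = 42;
```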
To improve performance of persistent staging, we have added a clustered index to SQL Server persistent staging output tables.
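On the output table this is conceptually similar to the statement below, built over your selected key column(s); the index and column names are hypothetical.

```sql
-- Hypothetical sketch of the clustered index over the selected key column(s).
CREATE CLUSTERED INDEX IX_Customer_PS_Key
ON dbo.Customer_PS (CustomerKey);
```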
We have added a warning to persistent staging tasks when data will be truncated. These warnings appear when an individual key column’s length is greater than 900 bytes, or when the total length of the selected key column(s) exceeds the recommended total of 2600 bytes, as this may cause data to be truncated when creating the clustered index.
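To check candidate key columns against these limits on SQL Server, a query along these lines works; the table and column names are assumptions.

```sql
-- Check the byte length of candidate key columns.
-- max_length is in bytes; -1 means (MAX), which cannot be an index key.
SELECT c.name, c.max_length
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID('dbo.Customer_PS')
  AND c.name IN ('CustomerKey', 'Email');
```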
We have also added an ‘LM__CollectionCreated’ column, which indicates the creation date of each record, to persistent staging output tables to improve the performance of persistent staging.
To further improve the performance of persistent staging, we now create a current table instead of a current view when generating output tables.
We have standardised column names in persistent staging output tables to align with Loome branding; they are now prefixed with LM__. If you have any downstream processes that read these columns, you will need to modify those processes once you have updated the Loome Agent.
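A downstream query would need to switch to the new prefixed names, as in the example below; only LM__CollectionCreated is a confirmed column name here, and the table name is a placeholder.

```sql
-- Downstream query updated for the LM__ prefix (table name is a placeholder;
-- only LM__CollectionCreated is a confirmed column name).
SELECT CustomerKey, LM__CollectionCreated
FROM dbo.Customer_PS
WHERE LM__CollectionCreated >= '2021-01-01';
```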
We have resolved an issue where the daily cleanup of jobs failed.
We have also fixed an issue where Shipyard jobs were orphaned because the pool was shut down before the last job executed.
We resolved an issue where a server restart caused jobs to become stuck in a ‘Processing’ state; the job would not execute again until it was cancelled and re-run.
We also fixed an issue where schedules that crossed a day boundary in UTC would fail to run.
We resolved an issue where the log cleanup did not execute correctly, which prevented execution history logs from loading.
We fixed an issue where setting the destination table name in a Persistent Staging task caused an error stating that the table could not be found.
We have also fixed an issue where Shipyard failed to run on a Linux agent because tasks failed with a permission error.
A Shipyard HPC script task with ‘Shared Volume Connection’ selected would fail. To fix this, we have updated the UI so that a ‘Volume File Definition’ is also selected when you choose a ‘Shared Volume Connection’.
Logs in the execution history of a job loaded slowly because many jobs were logging information to the AppLog table. We have added an index to the AppLog table to fix this.
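The shape of this fix is an index covering the columns that execution-history queries read, along the lines of the sketch below; the column names are assumptions, not the actual schema.

```sql
-- Hypothetical sketch of an index that lets execution-history queries
-- seek into AppLog instead of scanning it (column names are assumptions).
CREATE NONCLUSTERED INDEX IX_AppLog_Execution
ON dbo.AppLog (ExecutionId, LogDate);
```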
We fixed an issue where you could not create a Spark SQL Statement task using a Databricks connector.
We have fixed an issue where the cluster definition was displayed in SQL Statement tasks for connections other than Databricks.
We fixed the issue where persistent staging failed if the source and target databases used the same connection.
We also fixed an issue where you could not create a new tenant when you were already logged in to a tenant.
We have fixed the issue where the agent’s SignalR connection did not reconnect after a network issue.
Minimum Version Number: 2021.03.18.01