18th of March, 2021
Service
New Features
Manual Data Sets
You can now create an entirely manual data set or add manually entered rows to your source data using the Loome Monitor UI.
If you add a row in the Loome Monitor UI and then find that your source data already contains a row for the same record, you can easily merge the two rows together.
Any created data set will be available in your target connection if you would like to further integrate this data.
New Azure Synapse SQL Connector
We now support Azure Synapse SQL in Loome Monitor. You can stage your data both to and from Azure Synapse SQL.
Record Status
After you have run a rule in Loome Monitor, you can create a new record status or edit an existing one. Loome will label each row with a status based on the conditions you set.
Azure Synapse SQL Connector
We now support Azure Synapse SQL in Loome Integrate. You can use Azure Synapse SQL as a source and target connection in data migration, incremental data migration, and persistent staging tasks. This connector also includes the ability to select the ‘Copy’ function of Azure Synapse; if selected, you can choose your Azure Blob Storage connection. You can use the Copy function for data migration and persistent staging tasks, but not for incremental data migrations.
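For context, Azure Synapse’s ‘Copy’ function is its T-SQL COPY INTO statement, which loads data directly from Azure Blob Storage. The sketch below shows roughly what such a load looks like when issued by hand; the server, table, path, and credential values are hypothetical placeholders, and Loome Integrate performs the equivalent operation for you when the ‘Copy’ option is selected.

```python
# A minimal sketch of a direct Azure Synapse COPY INTO load. All names
# (server, database, table, storage path, credentials) are hypothetical
# placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;"
    "DATABASE=staging;UID=loader;PWD=<password>"
)

copy_sql = """
COPY INTO dbo.staging_orders
FROM 'https://myaccount.blob.core.windows.net/landing/orders/'
WITH (
    FILE_TYPE = 'CSV',
    FIRSTROW = 2,
    CREDENTIAL = (IDENTITY = 'Storage Account Key', SECRET = '<key>')
)
"""

with conn:  # commits on success
    conn.execute(copy_sql)
```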
Azure Synapse Spark Pool
We have also added support for Azure Synapse Spark Pools in Loome Integrate. You can use an Azure Synapse Spark Pool to run Spark SQL, as well as SQL scripts against Azure Synapse SQL.
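As a rough illustration, a Spark SQL statement run on a Synapse Spark pool might look like the sketch below. The database and table names are hypothetical, and in Synapse a `spark` session is normally provided for you in a notebook or Spark job definition.

```python
# A minimal Spark SQL sketch; in a Synapse notebook the 'spark' session
# already exists, so building one here is only for self-containment.
# Database and table names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("loome-example").getOrCreate()

summary = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count
    FROM staging.orders
    GROUP BY order_date
""")
summary.show()
```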
Improvements
Record Editing
- Monitor: We have improved record editing so that each record is now edited on its own page. In this view, you can easily move from one row to the next to streamline your editing.
- Monitor: When multiple users view the same results page, they will now be notified of any changes to records on that rule’s page. You will always see the latest records on the results page, and if another user updates a row while you are viewing the page, you will receive a notification. You will also be informed when a row is being saved, and you will not be able to edit another row until that save completes.
- Monitor: We’ve added the ability to create longer notes in the results view and increased the character limit on the notes custom field. If you have notes fields where you would like to use this higher limit, update your agent and re-run your rule; you will then be able to use up to 4000 characters.
You will need to manually modify the data type of existing notes fields (a sketch of this one-off change follows at the end of this list).
- Monitor: We have improved our persistent staging processes so that they handle changes such as modifying key columns and adding or removing columns.
- Monitor: To improve performance, we have moved metadata entities, i.e., custom fields, out of persistent staging. This is also more cost effective, as persistent staging no longer runs each time a record is updated. You can also add, remove, or rename metadata columns in your rule.
Renaming columns depends on your selected connection, as a rename may not be applied in the target output; for example, it is not supported for Azure Synapse SQL.
- Monitor: We’ve added clustered indexes to our SQL Server connector. Because an index key cannot exceed 900 bytes, we combine variable-length columns so that their combined data size does not exceed this limit when multiple columns are chosen as key columns. You will be notified when your selected columns exceed the limit, as this may cause data to be truncated when the clustered index is created.
- Monitor: Communication rules will now be available once a rule has been run and data is available in its results. You can find communication rules on the rule’s edit page.
- Monitor: We have improved the user experience when a page is loading and provided more visual indicators.
- Monitor: We have added the option to access the results of a rule from its edit screen.
- Monitor: We have updated our slicers to use SlicerView instead of SlicerFunction.
- Monitor: We’ve updated the names of columns generated by Loome to be consistent with Loome branding. If you previously used columns such as DGStartDate, StartDate, and EndDate, they will now be called _LM_STARTDATE and _LM_ENDDATE.
- Monitor: We have updated the date format in Loome to month, day, year, e.g., Sep 2, 2020.
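Regarding the notes field change above: the manual data type change is a one-off DDL statement against your output table. Below is a minimal sketch for a SQL Server target; the server, table, and column names are hypothetical placeholders, so check your own output table’s schema before running anything similar.

```python
# A one-off sketch that widens an existing notes custom field on a
# SQL Server target to the new 4000-character limit. The server, table,
# and column names are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=loome_output;Trusted_Connection=yes;"
)

with conn:  # commits on success
    conn.execute("ALTER TABLE dbo.RuleResults ALTER COLUMN Notes NVARCHAR(4000);")
```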
UTC
- Integrate: We have updated our Data Migration and Persistent Staging tasks to use UTC times. LoadDateTime fields will now come through as UTC.
Due to this change, records loaded after you update the agent may be timestamped earlier than existing rows in your persistent staging output.
This scenario can occur when the following conditions are met:
- Your LoadDateTime fields are currently loaded in local time.
- Your UTC offset is positive, e.g., +11.
- You run the job within the offset window after updating the agent, e.g., within 11 hours of updating the agent.
If you meet these conditions, do either of the following when you update your Loome Agent:
- After updating your agent, allow a window equal to your time zone’s UTC offset before running your job.
- Convert LoadDateTime values to UTC in the persistent staging output table before updating the agent and running the job again (a conversion sketch follows below).
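As a rough illustration of the second option, the sketch below converts local LoadDateTime values to UTC in a SQL Server output table. The table name and the ‘AUS Eastern Standard Time’ zone are hypothetical placeholders; use the time zone your local times were actually recorded in.

```python
# A rough sketch that converts local LoadDateTime values to UTC in a
# SQL Server persistent staging output table, to run before updating
# the agent. The table name and time zone are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=loome_staging;Trusted_Connection=yes;"
)

update_sql = """
UPDATE dbo.orders_staging
SET LoadDateTime = CONVERT(
    datetime2,
    LoadDateTime AT TIME ZONE 'AUS Eastern Standard Time' AT TIME ZONE 'UTC'
);
"""

with conn:  # commits on success
    conn.execute(update_sql)
```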
Persistent Staging Improvements
- Integrate: We have added a warning notification when selecting a currency column, to help ensure there are no irregularities in the data of your selected column. For consistent data in this column, we recommend using the ‘LoadDateTime’ column.
- Integrate: You can now add new columns to your persistent staging tasks. You can also change key columns and which columns are included in persistent staging. Depending on your selected connection, you can also change a column’s definition, and persistent staging will modify the column when it runs.
- Integrate: We have added an ‘LM_IsManual’ column to persistent staging output tables. When an external application, such as Loome Monitor, sets this value to true, the manual record will not be marked as deleted.
- Integrate: To improve performance of persistent staging, we have added a clustered index to SQL Server persistent staging output tables.
- Integrate: We have added a warning to persistent staging tasks when data will be truncated. These warnings appear when an individual column length is greater than 900, or when the total column length of the selected key column(s) exceeds the recommended total of 2600, as either may cause data to be truncated when creating a clustered index (a sketch of this check follows at the end of this section).
- Integrate: We have also added an ‘LM__CollectionCreated’ column, which indicates the creation date of each record, to persistent staging output tables to improve the performance of persistent staging.
- Integrate: To further improve the performance of persistent staging, we now create a current table instead of a current view when generating output tables.
- Integrate: We have standardized column names in persistent staging to align with Loome branding.
Column names in the Persistent Staging output tables are now prefixed with LM__. If you have any downstream processes that read these columns, you will need to modify those processes once you have updated the Loome Agent.
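To make the truncation thresholds above concrete, here is a small sketch of the kind of check involved. The column names and declared lengths are hypothetical, and the exact sizing rules Loome applies (for example, characters versus bytes per column type) may differ.

```python
# A small sketch of the kind of length check behind the truncation
# warnings: an individual key column over 900, or a combined key length
# over the recommended total of 2600. Column names and declared lengths
# are hypothetical placeholders.
key_columns = {"OrderNumber": 200, "CustomerCode": 1000, "Region": 1500}

PER_COLUMN_LIMIT = 900
TOTAL_LIMIT = 2600

for name, length in key_columns.items():
    if length > PER_COLUMN_LIMIT:
        print(f"Warning: key column '{name}' has length {length} "
              f"(over {PER_COLUMN_LIMIT}); data may be truncated.")

total_length = sum(key_columns.values())
if total_length > TOTAL_LIMIT:
    print(f"Warning: combined key length is {total_length} "
          f"(over the recommended {TOTAL_LIMIT}); data may be truncated "
          "when creating the clustered index.")
```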
Fixes
- Monitor: We have fixed an issue where the number slicer for decimal columns used a stepping increment that was too coarse. We have increased the stepping precision for decimal columns.
- Monitor: We have fixed the incorrect names that were displayed on the final button labels and the query page when creating a reference or glossary.
- Monitor: We have fixed an issue where you could not select the ‘new record’ column from the column dropdown when creating a communication rule branch.
- Monitor: Fixed an issue where the reason a row was ignored did not display on the results page.
- Monitor: We fixed an issue where a number slicer would not load filtered results and caused an error notification.
- Monitor: We have fixed an issue where, if a row was ignored while ‘Unresolved’ was the default status, the results view did not update immediately and continued to display the ignored row until the page was refreshed.
- Monitor: We have fixed the issue where a connection could not be changed when editing rules.
- Monitor: The name of a rule will now be displayed correctly in the execution slide-out when first running a rule.
- Integrate: We have resolved an issue where the daily cleanup of jobs failed.
- Integrate: We have also fixed an issue where Shipyard orphaned jobs because the pool was shut down before the last job executed.
- Integrate: We resolved an issue where a server restart caused jobs to be stuck in a ‘Processing’ state; such a job would not execute again until it was cancelled and re-run.
- Integrate: We also fixed an issue where schedules that overlapped days in UTC would fail to run.
- Integrate: We resolved an issue where the log cleanup did not execute correctly, which prevented execution history logs from loading.
- Integrate: We fixed an issue where setting the destination table name in a Persistent Staging task caused an error stating that the table could not be found.
- Integrate: We have also fixed an issue where Shipyard failed to run on a Linux agent because tasks failed with a permission error.
- Integrate: A Shipyard HPC script task with ‘Shared Volume Connection’ selected would fail. To fix this, we have updated the UI so that ‘Volume File Definition’ is also selected when ‘Shared Volume Connection’ is chosen.
- Integrate: Logs in a job’s execution history loaded slowly because many jobs were logging information to the AppLog table. We have added an index to the AppLog table to fix this.
- Integrate: We fixed an issue where you could not create a Spark SQL Statement task using a Databricks connector.
- Integrate: We have fixed an issue where the cluster definition was displayed in SQL Statement tasks for connections other than Databricks.
- Integrate: We fixed the issue where persistent staging failed if the source and target database used the same connection.
- Integrate: We also fixed an issue where you could not create a new tenant when you were already logged in to a tenant.
- Integrate: We have fixed an issue where the agent’s SignalR connection was not reconnecting after a network issue.
Agent
This update requires agent version 2021.03.18.1.
Please update your agent to use the features and improvements above.