Using an audit trail table on Oracle
Posted on Saturday, 5 October 2024
Implementing Auditable Updates in a Relational Database
In modern applications, maintaining an audit trail of changes to data is crucial for compliance, debugging, and data integrity. This blog post explores a straightforward approach to implementing auditable updates in a relational database system, specifically focusing on a project management scenario with hierarchical data.
Problem Description
We have a relational database containing Projects, each of which includes Instruments, Markets, and Valuations. These entities form a tree structure, adhering to the third normal form (3NF). Previously, any update to a project involved downloading the entire project tree, making changes, and uploading a new project under a new ID to ensure complete auditability.
This approach is inefficient for small updates and doesn't allow for granular tracking of changes. The goal is to enable small, precise updates to projects while maintaining a comprehensive audit trail of all changes.
Solution Overview
We introduce an audit table that records every change made to the database. The audit table will store serialized JSON representations of operations like update, insert, and delete. We'll also provide C# code to apply and revert these changes, effectively creating an undo stack.
Let's use the following DB schema for illustration; a short DDL sketch follows the list:
- Primary Keys: Each table has a primary key (e.g., ProjectID, InstrumentID).
- Foreign Keys: Child tables reference their parent via foreign keys (e.g., instruments.ProjectID references projects.ProjectID).
- Audit Table: The change_audit table records changes with fields like ChangeAuditID, TimeApplied, and ImpactJson.
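For concreteness, here is a minimal DDL sketch of two of those tables. The column lists are assumptions pieced together from the examples later in this post, not the real schema:

CREATE TABLE projects (
    ProjectID NUMBER PRIMARY KEY,
    Name VARCHAR2(200)
);

CREATE TABLE instruments (
    ProjectID NUMBER REFERENCES projects (ProjectID),
    InstrumentID NUMBER,
    Name VARCHAR2(200),
    Type VARCHAR2(100),
    LastUpdated TIMESTAMP,
    PRIMARY KEY (ProjectID, InstrumentID)
);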
Implementing Change Auditing
The change_audit table is designed to store all changes in a JSON format, for flexibility and ease of storage.
CREATE TABLE change_audit (
    ChangeAuditID NUMBER PRIMARY KEY,
    TimeApplied TIMESTAMP,
    UserID VARCHAR2(100),
    ImpactJson CLOB
);
JSON Structure for Changes
Each change is recorded as a JSON object:
{
  "Operation": "update",
  "Impact": [
    {
      "Table": "Instruments",
      "PrimaryKey": { "ProjectID": 4, "InstrumentID": 2 },
      "Column": "Name",
      "OldValue": "Old Instrument Name",
      "NewValue": "Updated Instrument Name"
    }
  ]
}
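Writing an audit row is then a plain insert into change_audit. A minimal sketch follows; the change_audit_seq sequence used to generate ChangeAuditID is an assumption, so substitute whatever key generation you have in place:

using Oracle.ManagedDataAccess.Client;
using System.Data;

public static class AuditWriter
{
    // Sketch: persist one operation's JSON into the audit table.
    // change_audit_seq is a hypothetical sequence for ChangeAuditID.
    public static void RecordChange(OracleConnection conn, string userId, string impactJson)
    {
        var query = "INSERT INTO change_audit (ChangeAuditID, TimeApplied, UserID, ImpactJson) " +
                    "VALUES (change_audit_seq.NEXTVAL, SYSTIMESTAMP, :userId, :impactJson)";
        using (var cmd = new OracleCommand(query, conn))
        {
            cmd.BindByName = true;
            cmd.Parameters.Add(new OracleParameter("userId", userId));
            // Bind explicitly as CLOB so large JSON payloads are handled correctly.
            cmd.Parameters.Add(new OracleParameter("impactJson", OracleDbType.Clob, impactJson, ParameterDirection.Input));
            cmd.ExecuteNonQuery();
        }
    }
}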
C# to apply changes given an operation
To apply changes recorded in the JSON, we'll use C# code that parses the JSON and executes the corresponding SQL commands. I assume you have the _connectionString available somewhere as a constant in the code.
using Oracle.ManagedDataAccess.Client;
using Newtonsoft.Json.Linq;
using System;
using System.Collections.Generic;

public class ChangeApplier
{
    // Assumed to be available somewhere in your code, as noted above.
    private const string _connectionString = "...";

    public void ApplyChanges(string jsonInput)
    {
        // Parse the JSON input
        var operation = JObject.Parse(jsonInput);
        string opType = operation["Operation"].ToString();
        var impactList = (JArray)operation["Impact"];

        using (var conn = new OracleConnection(_connectionString))
        {
            conn.Open();
            using (var transaction = conn.BeginTransaction())
            {
                try
                {
                    foreach (var impact in impactList)
                    {
                        string table = impact["Table"].ToString();
                        var primaryKey = (JObject)impact["PrimaryKey"];
                        string column = impact["Column"]?.ToString();
                        string newValue = impact["NewValue"]?.ToString();

                        switch (opType)
                        {
                            case "update":
                                ApplyUpdate(conn, table, primaryKey, column, newValue);
                                break;
                            case "insert":
                                ApplyInsert(conn, table, impact);
                                break;
                            case "delete":
                                ApplyDelete(conn, table, primaryKey);
                                break;
                        }
                    }
                    transaction.Commit();
                }
                catch (Exception ex)
                {
                    transaction.Rollback();
                    Console.WriteLine($"Error applying changes: {ex.Message}");
                }
            }
        }
    }

    private void ApplyUpdate(OracleConnection conn, string table, JObject primaryKey, string column, string newValue)
    {
        // Table and column names cannot be bound as parameters, so validate
        // them against a whitelist of known tables/columns before use.
        var pkConditions = BuildPrimaryKeyCondition(primaryKey);
        var query = $"UPDATE {table} SET {column} = :newValue WHERE {pkConditions}";
        using (var cmd = new OracleCommand(query, conn))
        {
            cmd.BindByName = true; // ODP.NET binds by position unless told otherwise
            cmd.Parameters.Add(new OracleParameter("newValue", newValue));
            AddPrimaryKeyParameters(cmd, primaryKey);
            cmd.ExecuteNonQuery();
        }
    }

    private void ApplyInsert(OracleConnection conn, string table, JToken impact)
    {
        var primaryKey = (JObject)impact["PrimaryKey"];
        var newValues = (JObject)impact["NewValues"];
        var columns = new List<string>();
        var values = new List<string>();
        foreach (var property in primaryKey.Properties())
        {
            columns.Add(property.Name);
            values.Add($":{property.Name}");
        }
        foreach (var property in newValues.Properties())
        {
            columns.Add(property.Name);
            values.Add($":{property.Name}");
        }
        var query = $"INSERT INTO {table} ({string.Join(", ", columns)}) VALUES ({string.Join(", ", values)})";
        using (var cmd = new OracleCommand(query, conn))
        {
            cmd.BindByName = true;
            foreach (var property in primaryKey.Properties())
            {
                cmd.Parameters.Add(new OracleParameter(property.Name, property.Value.ToString()));
            }
            foreach (var property in newValues.Properties())
            {
                cmd.Parameters.Add(new OracleParameter(property.Name, property.Value.ToString()));
            }
            cmd.ExecuteNonQuery();
        }
    }

    private void ApplyDelete(OracleConnection conn, string table, JObject primaryKey)
    {
        var pkConditions = BuildPrimaryKeyCondition(primaryKey);
        var query = $"DELETE FROM {table} WHERE {pkConditions}";
        using (var cmd = new OracleCommand(query, conn))
        {
            cmd.BindByName = true;
            AddPrimaryKeyParameters(cmd, primaryKey);
            cmd.ExecuteNonQuery();
        }
    }

    private string BuildPrimaryKeyCondition(JObject primaryKey)
    {
        // One "col = :col" condition per primary key column.
        var conditions = new List<string>();
        foreach (var prop in primaryKey.Properties())
        {
            conditions.Add($"{prop.Name} = :{prop.Name}");
        }
        return string.Join(" AND ", conditions);
    }

    private void AddPrimaryKeyParameters(OracleCommand cmd, JObject primaryKey)
    {
        // Bind the primary key values referenced by the WHERE clause.
        foreach (var prop in primaryKey.Properties())
        {
            cmd.Parameters.Add(new OracleParameter(prop.Name, prop.Value.ToString()));
        }
    }
}
- ApplyChanges: Parses the JSON input and determines the operation type (see the usage example after this list).
- ApplyUpdate: Executes an UPDATE SQL command, using bind parameters for the values to prevent SQL injection.
- ApplyInsert: Executes an INSERT SQL command, constructing columns and values from the JSON.
- ApplyDelete: Executes a DELETE SQL command based on the primary key.
- BuildPrimaryKeyCondition: Constructs the WHERE clause for SQL commands.
- AddPrimaryKeyParameters: Binds the primary key values referenced by that WHERE clause.
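Putting it together, applying the sample update from earlier looks like this (a sketch, assuming _connectionString has been filled in):

var applier = new ChangeApplier();
string json = @"{
  ""Operation"": ""update"",
  ""Impact"": [
    {
      ""Table"": ""Instruments"",
      ""PrimaryKey"": { ""ProjectID"": 4, ""InstrumentID"": 2 },
      ""Column"": ""Name"",
      ""OldValue"": ""Old Instrument Name"",
      ""NewValue"": ""Updated Instrument Name""
    }
  ]
}";
applier.ApplyChanges(json);

In practice you would write the same JSON into change_audit at the moment you apply it, so the data and the audit trail cannot drift apart.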
A side note on inserts: if you are using auto-incremented IDs, you won't know the new IDs until the data has been inserted, so make sure to capture the generated IDs first and only then create the audit log entry. This is left as an exercise for the reader in case it is necessary, though a sketch of one approach follows.
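The sketch below assumes, unlike the composite-key examples above, that InstrumentID is generated by an identity column or trigger; Oracle's RETURNING clause then hands the new key back through an output parameter:

using Oracle.ManagedDataAccess.Client;
using Oracle.ManagedDataAccess.Types;
using System.Data;

public static class InsertWithGeneratedId
{
    // Hypothetical sketch: capture a generated key so it can go into the audit entry.
    public static decimal InsertInstrument(OracleConnection conn, int projectId, string name)
    {
        var query = "INSERT INTO instruments (ProjectID, Name) VALUES (:projectId, :name) " +
                    "RETURNING InstrumentID INTO :newId";
        using (var cmd = new OracleCommand(query, conn))
        {
            cmd.BindByName = true;
            cmd.Parameters.Add(new OracleParameter("projectId", projectId));
            cmd.Parameters.Add(new OracleParameter("name", name));
            var newId = new OracleParameter("newId", OracleDbType.Decimal);
            newId.Direction = ParameterDirection.Output;
            cmd.Parameters.Add(newId);
            cmd.ExecuteNonQuery();
            // ODP.NET returns output parameters as provider types; unwrap to decimal.
            return ((OracleDecimal)newId.Value).Value;
        }
    }
}

The returned key can then be recorded in the PrimaryKey object of the audit entry.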
C# to revert changes
To revert changes (undo operations), we process the audit trail in reverse order. Here I show the processing of a list of operations as an example of unrolling. Note that reverting a delete touches only a single table, so if related rows were also removed via cascading referential integrity, it was the audit table's job to have captured those deletions as separate impacts.
public class ChangeReverter
{
    // Assumed to be available somewhere in your code, as noted above.
    private const string _connectionString = "...";

    public void RevertChanges(List<string> jsonOperations)
    {
        using (var conn = new OracleConnection(_connectionString))
        {
            conn.Open();
            using (var transaction = conn.BeginTransaction())
            {
                try
                {
                    // Note: you could also fetch the operations from the audit table
                    // sorted by TimeApplied descending instead of reversing them here.
                    jsonOperations.Reverse();
                    foreach (var operationJson in jsonOperations)
                    {
                        var operation = JObject.Parse(operationJson);
                        string opType = operation["Operation"].ToString();
                        var impactList = (JArray)operation["Impact"];
                        foreach (var impact in impactList)
                        {
                            string table = impact["Table"].ToString();
                            var primaryKey = (JObject)impact["PrimaryKey"];
                            string column = impact["Column"]?.ToString();
                            string oldValue = impact["OldValue"]?.ToString();

                            switch (opType)
                            {
                                case "update":
                                    RevertUpdate(conn, table, primaryKey, column, oldValue);
                                    break;
                                case "insert":
                                    // Undo an insert by deleting the inserted row.
                                    ApplyDelete(conn, table, primaryKey);
                                    break;
                                case "delete":
                                    // Undo a delete by re-inserting the old row.
                                    RevertDelete(conn, table, impact);
                                    break;
                            }
                        }
                    }
                    transaction.Commit();
                }
                catch (Exception ex)
                {
                    transaction.Rollback();
                    Console.WriteLine($"Error reverting changes: {ex.Message}");
                }
            }
        }
    }

    private void RevertUpdate(OracleConnection conn, string table, JObject primaryKey, string column, string oldValue)
    {
        var pkConditions = BuildPrimaryKeyCondition(primaryKey);
        var query = $"UPDATE {table} SET {column} = :oldValue WHERE {pkConditions}";
        using (var cmd = new OracleCommand(query, conn))
        {
            cmd.BindByName = true;
            cmd.Parameters.Add(new OracleParameter("oldValue", oldValue));
            AddPrimaryKeyParameters(cmd, primaryKey);
            cmd.ExecuteNonQuery();
        }
    }

    private void RevertDelete(OracleConnection conn, string table, JToken impact)
    {
        // Rebuild the deleted row from the primary key and the stored old values.
        var primaryKey = (JObject)impact["PrimaryKey"];
        var oldValues = (JObject)impact["OldValues"];
        var columns = new List<string>();
        var values = new List<string>();
        foreach (var property in primaryKey.Properties())
        {
            columns.Add(property.Name);
            values.Add($":{property.Name}");
        }
        foreach (var property in oldValues.Properties())
        {
            columns.Add(property.Name);
            values.Add($":{property.Name}");
        }
        var query = $"INSERT INTO {table} ({string.Join(", ", columns)}) VALUES ({string.Join(", ", values)})";
        using (var cmd = new OracleCommand(query, conn))
        {
            cmd.BindByName = true;
            foreach (var property in primaryKey.Properties())
            {
                cmd.Parameters.Add(new OracleParameter(property.Name, property.Value.ToString()));
            }
            foreach (var property in oldValues.Properties())
            {
                cmd.Parameters.Add(new OracleParameter(property.Name, property.Value.ToString()));
            }
            cmd.ExecuteNonQuery();
        }
    }

    // Reuse BuildPrimaryKeyCondition, AddPrimaryKeyParameters and ApplyDelete
    // from ChangeApplier.
}
- RevertChanges: Processes the list of JSON operations in reverse order to undo changes (see the sketch after this list for loading that list from the audit table).
- RevertUpdate: Sets the column back to its old value.
- RevertDelete: Re-inserts a deleted row using the old values stored in the audit trail.
- ApplyDelete: Deletes a row, used here to undo an insert operation.
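As an example of feeding the reverter from the audit trail itself, here is a sketch that loads the ImpactJson payloads applied since a cutoff; the query is an assumption based on the change_audit definition above:

using Oracle.ManagedDataAccess.Client;
using System;
using System.Collections.Generic;

public static class AuditReader
{
    public static List<string> LoadOperationsSince(OracleConnection conn, DateTime cutoff)
    {
        // Load in the order the changes were applied; RevertChanges
        // reverses the list itself before undoing them.
        var operations = new List<string>();
        var query = "SELECT ImpactJson FROM change_audit WHERE TimeApplied >= :cutoff ORDER BY TimeApplied";
        using (var cmd = new OracleCommand(query, conn))
        {
            cmd.BindByName = true;
            cmd.Parameters.Add(new OracleParameter("cutoff", cutoff));
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    operations.Add(reader.GetString(0));
                }
            }
        }
        return operations;
    }
}

Calling new ChangeReverter().RevertChanges(operations) with that list then undoes everything since the cutoff.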
JSON schema
The reason I prefer to work with the JSON directly in the C# code is that writing C# classes for this schema is actually more work than processing the JSON directly.
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "ImpactJsonRoot",
  "type": "object",
  "properties": {
    "Operation": {
      "type": "string",
      "enum": ["update", "insert", "delete"],
      "description": "Type of operation"
    },
    "Impact": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "Table": {
            "type": "string",
            "description": "Name of the table affected"
          },
          "PrimaryKey": {
            "type": "object",
            "description": "Primary key fields and their values",
            "additionalProperties": {
              "type": ["number", "null"]
            }
          },
          "Column": {
            "type": "string",
            "description": "Column affected (for updates)"
          },
          "OldValue": {
            "type": ["string", "number", "boolean", "null"],
            "description": "Previous value (for updates and deletes)"
          },
          "NewValue": {
            "type": ["string", "number", "boolean", "null"],
            "description": "New value (for updates and inserts)"
          },
          "OldValues": {
            "type": "object",
            "description": "All old values (for deletes)",
            "additionalProperties": {
              "type": ["string", "number", "boolean", "null"]
            }
          },
          "NewValues": {
            "type": "object",
            "description": "All new values (for inserts)",
            "additionalProperties": {
              "type": ["string", "number", "boolean", "null"]
            }
          }
        },
        "required": ["Table", "PrimaryKey"]
      }
    }
  },
  "required": ["Operation", "Impact"]
}
and here are examples of operations:
update
{
  "Operation": "update",
  "Impact": [
    {
      "Table": "Instruments",
      "PrimaryKey": { "ProjectID": 4, "InstrumentID": 2 },
      "Column": "Name",
      "OldValue": "Old Instrument Name",
      "NewValue": "Updated Instrument Name"
    }
  ]
}
insert
{
  "Operation": "insert",
  "Impact": [
    {
      "Table": "Instruments",
      "PrimaryKey": { "ProjectID": 4, "InstrumentID": 10 },
      "NewValues": {
        "Name": "New Instrument",
        "Type": "Flexible Asset",
        "LastUpdated": "2024-10-05T12:34:56Z"
      }
    }
  ]
}
delete
{
  "Operation": "delete",
  "Impact": [
    {
      "Table": "Instruments",
      "PrimaryKey": { "ProjectID": 4, "InstrumentID": 5 },
      "OldValues": {
        "Name": "Obsolete Instrument",
        "Type": "Flexible Asset",
        "LastUpdated": "2024-10-01T09:15:00Z"
      }
    }
  ]
}
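If you want to validate incoming payloads against this schema before applying them, a minimal sketch using the Newtonsoft.Json.Schema package (a separate NuGet package with its own license) could look like this:

using System.Collections.Generic;
using Newtonsoft.Json.Linq;
using Newtonsoft.Json.Schema; // separate package, not part of Newtonsoft.Json

public static class ImpactJsonValidator
{
    public static bool Validate(string schemaJson, string payloadJson, out IList<string> errors)
    {
        var schema = JSchema.Parse(schemaJson);
        var payload = JObject.Parse(payloadJson);
        // Returns false and fills 'errors' with messages for any violations.
        return payload.IsValid(schema, out errors);
    }
}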
Note: OpenAI's o1-preview was used to assist in the creation of this post.
VSCode Snippets
Posted on Thursday, 18 July 2024
I use Visual Studio Code to write markdown notes of my meetings. I put a datetime on every meeting to be able to find it afterwards. One of the shorthands I use is 'dt'. I always forget how to set it up when I reinstall VS Code, so here are the instructions from GitHub Copilot Chat, which explained how to set it up again!
So now I know I can ask Copilot how to do it, but I still want to document it, so here it is:
Github Copilot Instructions
To recreate the shorthand command in Visual Studio Code for inserting the current date and time in a Markdown file using Ctrl+Space, you can use a snippet. Here's how to set it up:
- Open the Command Palette with Ctrl+Shift+P.
- Type Configure User Snippets and select it.
- Choose markdown.json for Markdown files.
- Insert the following snippet configuration:
"Insert Current Date and Time": {
"prefix": "dt",
"body": [
"### $CURRENT_YEAR.$CURRENT_MONTH.$CURRENT_DATE $CURRENT_HOUR:$CURRENT_MINUTE:$CURRENT_SECOND"
],
"description": "Inserts the current date and time"
}
- Save the markdown.json file.
Now, when you type dt in a Markdown file and press Ctrl+Space, it should suggest your snippet. Selecting it will insert the current date and time formatted as you described.
Adding LaTeX Maths to Writebook
Posted on Saturday, 13 July 2024
How to add LaTeX maths to Writebook
I've started using ONCE Writebook to host some markdown documents, including my book on Optimization applied to energy products. But that book contains tons of LaTeX-formatted mathematics in its Markdown, which Writebook does not support at this time.
So I patched support for it into the docker image.
Copy out the app layout file
Assuming your Docker container is named writebook:
sudo docker cp writebook:/rails/app/views/layouts/application.html.erb .
Modify the application erb to add support
In the head section of the layout, add:
<script type="text/javascript">
  window.MathJax = {
    tex: {
      inlineMath: [['$', '$']],
      displayMath: [['$$', '$$']],
      processEscapes: true
    }
  };
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/3.2.2/es5/tex-mml-chtml.js"></script>
Thing is, it doesn't render the first time a page loads, so I have to refresh the page to get the rendered maths. It's not too much of an issue right now; I actually like seeing the LaTeX source before the rendered version.
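If the missing first render bothers you, one possible workaround is to re-run the typesetter after each page navigation. This assumes Writebook navigates with Hotwire's Turbo, which I haven't verified, so treat it as a sketch:

<script type="text/javascript">
  // Hypothetical: re-typeset the maths after Turbo-driven page visits.
  document.addEventListener("turbo:load", function () {
    if (window.MathJax && window.MathJax.typesetPromise) {
      window.MathJax.typesetPromise();
    }
  });
</script>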
Save and copy the file back
Then finally, copy the file back into the container and restart it:
sudo docker cp application.html.erb writebook:/rails/app/views/layouts/application.html.erb
sudo docker restart writebook
Receiving compressed data from an http(s) endpoint
Posted on Wednesday, 20 March 2024
With the amount of data that we pass around between services, it is often beneficial to compress the data to reduce transmission time. Modern platforms and algorithms are very efficient at compressing regular data, particularly text or JSON.
If the developer of the endpoint has prepared their service for compression, the client must still indicate that it is ready to receive compressed data. Luckily, most modern HTTP client implementations in R, Python, JavaScript and .NET support compression and decompression seamlessly for the client. This means that you can switch the compression headers on and simply benefit from receiving compressed data.
We can also check the Content-Encoding response header to see which compression was used. I've found that example.com sends responses compressed with gzip.
Python
import requests

url = "http://example.com"  # Replace with the actual URL you want to request

# Specify the accepted encoding methods in the headers
headers = {
    'Accept-Encoding': 'gzip, br',
}

response = requests.get(url, headers=headers)
print(response.text)

# In case you want to see if it was compressed, you can check via the headers:
# if 'Content-Encoding' in response.headers:
#     print(f"Response was compressed using: {response.headers['Content-Encoding']}")
# else:
#     print("Response was not compressed.")
R
library(httr)
# The URL to which you're sending the request
url <- "http://example.com"
# Setting the Accept-Encoding header
response <- GET(url, add_headers(`Accept-Encoding` = 'gzip, br'))
# The content of the response will be automatically decompressed by httr, so you can access it directly.
content(response, "text")
C#
In C#, for some ungodly strange reason, the standard HTTP client doesn't decompress for you automatically unless you add a decompression handler; see HttpClientHandler.AutomaticDecompression.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main(string[] args)
    {
        var handler = new HttpClientHandler();
        // Enabling automatic decompression also adds the Accept-Encoding headers for you.
        handler.AutomaticDecompression = System.Net.DecompressionMethods.GZip;
        using (var client = new HttpClient(handler))
        {
            string url = "http://example.com";
            HttpResponseMessage response = await client.GetAsync(url);
            response.EnsureSuccessStatusCode();
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}
JavaScript
It is so easy that you don't even need to do anything other than setting gzip: true for the support.
const request = require('request');

const requestOptions = {
  url: 'http://example.com',
  gzip: true, // This is all that is required
};

request(requestOptions, (error, response, body) => {
  // Handle the response here
});