TUnit Event Receiver Data Inconsistencies: A Deep Dive
Hey guys! I've been wrestling with some inconsistencies in TUnit's attribute-based event receivers, and I'm hoping to get some clarity. I'm trying to create real-time test status reporting for a CI system that doesn't natively support TUnit. Using the event receivers, I want to capture test suite starts and finishes (for each assembly and class), and individual test statuses (pass, fail, skip) along with details like duration and exceptions. The main issue is that some events aren't firing as expected, or they're missing crucial data. Let's dig into it and see what's happening.
The Core Problem: Inconsistent Event Triggering and Data
My primary concern revolves around the reliability of the event receivers, specifically the OnLastTestIn{Assembly,Class} and OnTestSkipped methods. The core of my problem is that crucial events aren't being triggered consistently, which makes it difficult to provide accurate, real-time progress reports in a CI environment. The lack of assemblyDone and classDone events means I can't easily signal the completion of a test suite to my CI system. Similarly, the missing OnTestSkipped events and the inconsistent reporting of skip reasons leave the CI report with incomplete or misleading data.
Let's break down the specific issues I'm seeing:
- OnLastTestIn{Assembly,Class} are never called. This is a big one. When I run my tests, I see the assemblyStart and classStart events firing as expected, but the corresponding assemblyDone and classDone events never trigger. This means I can't reliably determine when a test suite or a class of tests has finished. For CI systems, the lack of end markers makes it hard to track overall test progress accurately; without these signals I'm left with incomplete information, which undermines the goal of real-time reporting.
- OnTestSkipped is missing in action. Even though some tests are skipped, the OnTestSkipped event handler never gets called. Skipped tests are a valid outcome, and I need to report them in the CI output just like passes and failures.
- Skipped test reasons are inconsistent. This is the most complex issue: when a test is skipped using Skip.Test, the reason isn't always reported correctly. Even when OnTestEnd does fire, ctx.SkipReason is often missing or empty. That makes it hard to tell why a test was skipped, and in CI, knowing why a test was skipped is very helpful for debugging. The way skip reasons are handled within TUnit seems quite inconsistent, which is causing headaches.
Reproducing the Issues: A Step-by-Step Guide
To really understand what's happening, I've created a simple test case. Here's how you can reproduce the problems I'm seeing. This will help us isolate the issues and get a clearer picture of what's going on.
Setting Up the Environment
First, we need to make sure we've got the right tools and setup. You'll need the .NET SDK installed on your system. Here's a quick rundown of the steps.
- Check Your .NET Version: Make sure you have a compatible .NET SDK installed. The example was tested with 10.0.100-rc.2.25502.107, but any recent version should work. Open your terminal or command prompt and run dotnet --version to verify.
- Create a New Console Project: Navigate to a directory where you want to work and create a new console application: mkdir repro && cd repro, then dotnet new console.
- Add the TUnit Package: Add the TUnit NuGet package to your project with dotnet add package TUnit@0.72.0. The complete command sequence is collected below.
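Putting those steps together, the whole setup boils down to these commands (using the exact package version from my repro; a newer TUnit may behave differently):
> mkdir repro && cd repro
> dotnet new console
> dotnet add package TUnit@0.72.0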
The Code (Program.cs)
Replace the contents of Program.cs with the following code. It defines a set of tests, some of which will pass, fail, and be skipped, and it includes an attribute-based event receiver to capture test events so we can observe the behavior of the handlers.
using TUnit.Core.Interfaces;

[assembly: ReproReporter]

class T1 {
    [Test] public void T1Pass() {}
    [Test] public void T1Fail() => throw null!;
    [Test] public void T1Skip() { Skip.Test("Reason1"); }   // skipped at runtime
}

class T2 {
    [Test, Arguments(1), Arguments(2, Skip = "Reason2")] public void T2Cases(int i) {}
    [Test, Skip("Reason3")] public void T2Skip() {}          // skipped declaratively
}

// Assembly-level attribute implementing the event receiver interfaces; each handler
// logs the event in a machine-readable form for the CI reporter.
[AttributeUsage(AttributeTargets.Assembly)]
class ReproReporterAttribute : Attribute, ITestStartEventReceiver, ITestEndEventReceiver, ITestSkippedEventReceiver,
    IFirstTestInAssemblyEventReceiver, ILastTestInAssemblyEventReceiver,
    IFirstTestInClassEventReceiver, ILastTestInClassEventReceiver
{
    public ValueTask OnTestStart(TestContext ctx) => Log($"testStart {ctx.GetDisplayName()}");

    public ValueTask OnTestSkipped(TestContext ctx) => Log($"testSkipped {ctx.GetDisplayName()} ({ctx.SkipReason})");

    public ValueTask OnTestEnd(TestContext ctx) => ctx.Result?.State switch {
        TestState.Failed  => Log($"testDone {ctx.GetDisplayName()} (Failed={ctx.Result?.Exception?.Message})"),
        TestState.Skipped => Log($"testDone {ctx.GetDisplayName()} (Skipped={ctx.SkipReason})"),
        _                 => Log($"testDone {ctx.GetDisplayName()} ({ctx.Result?.State})")
    };

    public ValueTask OnFirstTestInAssembly(AssemblyHookContext asm, TestContext ctx)
        => Log($"assemblyStart {asm.Assembly.GetName().Name}");

    public ValueTask OnLastTestInAssembly(AssemblyHookContext asm, TestContext ctx)
        => Log($"assemblyDone {asm.Assembly.GetName().Name}");

    public ValueTask OnFirstTestInClass(ClassHookContext cls, TestContext ctx)
        => Log($"classStart {cls.ClassType.FullName}");

    public ValueTask OnLastTestInClass(ClassHookContext cls, TestContext ctx)
        => Log($"classDone {cls.ClassType.FullName}");

    // Emit a machine-readable marker line on the original console output stream.
    static ValueTask Log(string message) {
        GlobalContext.Current.OriginalConsoleOut.WriteLine($"##report[{message}]");
        return ValueTask.CompletedTask;
    }
}
Running the Tests
Now, build and run your project with dotnet run. This compiles and executes the tests, triggering the event receivers defined in the ReproReporterAttribute class, which log messages to the console.
> dotnet run
Observing the Output
After running the tests, observe the output. You should see a series of messages prefixed with ##report. These are generated by the event receivers and indicate which events were triggered. Pay close attention to which events fire and what data they carry.
Compare the actual output with what you'd expect, keeping the issues above in mind. You should notice the missing assemblyDone and classDone events, as well as the erratic behavior of OnTestSkipped and the skip reasons.
Analyzing the Results
By comparing the actual and expected output, you can see the discrepancies: OnLastTestInAssembly and OnLastTestInClass are never called, OnTestSkipped is missing in some scenarios, and skip reasons are lost in some cases. This points to where the event handling isn't working as expected. Here's the output from my run:
##report[assemblyStart repro]
##report[classStart T1]
##report[testStart T1Pass]
##report[testDone T1Pass (Passed)]
##report[testStart T1Fail]
##report[testDone T1Fail (Failed=Object reference not set to an instance of an object.)]
##report[testStart T1Skip]
##report[testDone T1Skip (Skipped=)]
##report[classStart T2]
##report[testStart T2Cases(1)]
##report[testDone T2Cases(1) (Passed)]
failed T1Fail (0ms)
TUnit.Engine.Exceptions.TestFailedException: NullReferenceException: Object reference not set to an instance of an object.
at T1.T1Fail() in Program.cs:7
skipped T1Skip (0ms)
Skipped
skipped T2Cases(2) (0ms)
Reason2
skipped T2Skip (0ms)
Reason3
Test run summary: Failed! - bin/Debug/net10.0/repro.dll (net10.0|arm64)
total: 6
failed: 1
succeeded: 2
skipped: 3
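For comparison, here are the additional ##report lines I'd expect to see somewhere in that run, based purely on the handlers in ReproReporterAttribute (the exact ordering is my assumption, not documented TUnit behavior):
##report[testSkipped T1Skip (Reason1)]
##report[testDone T1Skip (Skipped=Reason1)]
##report[testSkipped T2Cases(2) (Reason2)]
##report[testSkipped T2Skip (Reason3)]
##report[classDone T1]
##report[classDone T2]
##report[assemblyDone repro]
Instead, T1Skip comes through as Skipped= with an empty reason, the declaratively skipped T2Cases(2) and T2Skip never reach the receivers at all, and the classDone and assemblyDone markers never appear.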
Deep Dive into the Observed Issues
Let's take a closer look at the problems we're seeing.
- OnLastTestIn{Assembly,Class} not being triggered: This is a major gap. The assemblyDone and classDone events are critical for signaling the end of a test suite or class; without them, it's difficult to report the full progress of a test run to a CI system. The fact that they don't fire suggests a problem in how TUnit finalizes tests, perhaps the receivers aren't registered for those events, or the events are simply never raised.
- OnTestSkipped is never called: When a test is skipped, this event should fire so the receiver gets the relevant details, but it doesn't, so skipped tests can't be reflected accurately in CI reports. Either the event isn't correctly wired into the skip path, or some filtering prevents it from being raised in these circumstances.
- Inconsistent skip reasons: Sometimes ctx.SkipReason is captured correctly, other times it's missing or empty. This inconsistency makes it hard to rely on the data for debugging or reporting, and it breaks the reporting of why certain tests were skipped. The issue might be in how skip reasons are stored or retrieved within TUnit, or a timing problem where the reason isn't yet available when the event fires.
Potential Solutions and Workarounds
So, what can we do? Let's talk about some workarounds and ways to potentially solve these issues.
Potential Bugs or Missing Features?
It's important to determine whether these are bugs or simply missing features. If they're bugs, the next step is to file a report with the TUnit maintainers; if they're missing features, we need to see what the framework currently offers and work within that.
Workarounds
- Alternative Event Handling: If the attribute-based approach isn't working as expected, you might have to explore other integration points, such as a custom test runner or a different reporting mechanism. This is more work, but it may give you better control over the events and data; you'd have to implement the logic to capture and report test results yourself.
- Post-Processing the Output: If the event receivers are unreliable, another approach is to parse the test output and extract the needed information. In a CI system, a small script can read the ##report lines (or the runner's own output) and generate a report. This is more error-prone, but it may be the easiest route, and it gets the information into a usable format; see the sketch after this list.
- Investigate the TUnit Code: If you have the time and the inclination, you could dig into the TUnit source to find out why the events aren't firing or why the data is inconsistent, and possibly patch the problems yourself.
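As a rough illustration of the post-processing idea, here's a minimal sketch that runs the repro project and pulls the ##report markers out of its output. It assumes the marker format emitted by the ReproReporterAttribute above; the project path and the final reporting step are placeholders you'd adapt to your CI setup.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Text.RegularExpressions;

// Minimal sketch: run the tests and collect the ##report[...] markers from stdout.
// The project path is a placeholder; adjust it for your environment.
var psi = new ProcessStartInfo("dotnet", "run --project ./repro")
{
    RedirectStandardOutput = true,
    UseShellExecute = false
};

using var proc = Process.Start(psi)!;
var markers = new List<string>();
var pattern = new Regex(@"^##report\[(?<body>.+)\]$");

string? line;
while ((line = proc.StandardOutput.ReadLine()) != null)
{
    var match = pattern.Match(line);
    if (match.Success)
    {
        markers.Add(match.Groups["body"].Value);   // e.g. "testDone T1Pass (Passed)"
    }
}
proc.WaitForExit();

// Hand the collected events to whatever reporting API your CI system exposes.
foreach (var marker in markers)
{
    Console.WriteLine($"would report: {marker}");
}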
Missing Features
If these turn out to be missing features, the best option is to open a feature request or discussion with the TUnit maintainers; that's the most direct way to push for the changes, and maybe we can even help implement the missing pieces. That takes time, though, so if the features can't land as intended, we can fall back to one of the workarounds above.
Conclusion: Seeking Clarity and a Path Forward
I'm hoping to get some clarity on these issues. Are these potential bugs, or is there a better way to achieve real-time progress reporting in CI with TUnit? If there are any missing features or bugs, I will make a formal report to the TUnit maintainers. The ability to accurately report test progress is very important, and hopefully, we can fix this. Any help or suggestions would be greatly appreciated. Thanks, guys!