How we create (real) UnitTests for Unreal Engine with GoogleMock

By Ihno Lübbers

April 8, 2021

As a software consulting firm, we often work on demanding projects, e.g., from the automotive industry, which require a high degree of stability and maintainability. In the department I work in (Mobile and X-Reality), we do a lot of highly polished mobile development on the one hand; on the other hand, we also use the Unity Engine for nearly all kinds of 3D-related topics. We are accustomed to using mature testing frameworks to test most of our codebases. These frameworks usually provide:

  • Rich feature sets, like mocking of dependencies.
  • Setting expectations on these mocks.
  • In the best case, support for matchers.

With the plugin for the Unreal Engine that we are currently developing, we tried to achieve a similar quality level as in other projects from an architectural standpoint and also regarding clean code and test coverage. I will discuss some of the approaches we tried to solve this problem and why they fell short of our expectations.

After trying to write our tests using only what Unreal provides and then trying to use the GoogleTest framework purely, we are now happy with a setup including GoogleMock combined with the Unreal Automation framework - only a Sith deals in absolutes. The first part of this article gives more general information about our setup. In the second part of the article, specific implementation details will be explained, e.g., about the usage of adapters and facades.

Yeah, a test framework

We were happy to discover a test framework integrated into the Unreal Engine, which is also accessible via the command line - the Unreal Automation Framework. Imagine our disappointment when we realized that we could only run our tests but had no way to mock our dependencies. Unfortunately, the framework lacks this feature entirely and does not seem designed for creating real UnitTests, but only for integration tests.

Naturally, the first thing we did was a search on how other companies approach this problem. After this, our impression was that mocking and trying to achieve a high level of code coverage with tests is not so common in the Unreal universe - or at least that other companies are not telling us about it. Nevertheless, we found an exciting blog post from Eric Lemes, which tackles our problem exactly and solves it using the GoogleTest framework instead of Unreal Automation. The proposed solution allows mocking and executes the tests in milliseconds, much faster than with Unreal Automation. The increase in speed comes from running the tests in their own executable instead of using the Unreal Engine to run them.

We integrated the described setup into our project as a prototype. The central problem of using GoogleTest instead of Unreal's default framework is that your code must not contain any dependencies on Unreal modules other than "Core." Otherwise, they would be built together with the tests, which does not seem to work. Luckily, our project was already well prepared for being built and tested with GoogleTest, as most of our business logic is in separate modules that do not depend on the Unreal universe.

We quickly had a working test environment that built and ran some basic tests on our methods. Nevertheless, we also had a bunch of issues with the proposed setup:

  1. We could not test modules of our project which depend on more than the Unreal Core module.
    • Integration Tests were naturally not possible.
  2. Our IDE (Rider for Unreal) did not support GoogleTest at that time.
    • No debugging of tests.
    • No code completion/highlighting.
    • No direct execution of (single) tests from inside the IDE.
    • Visual Studio seems to be no better.
  3. The setup was quite complex and challenging to explain to our customer.

Especially point 2 drove us away from using that specific setup with GoogleTest. Also, we were planning to create integration tests, UI tests, and blueprint tests in the future, which would definitely not work without the Unreal Engine in the test pipeline. So at that point, we took a step back and reconsidered our actual problem, which was not the lack of a test framework but only the lack of mocking.

That led us to combine only GoogleMock (not GoogleTest) with the Unreal Automation Framework, getting the best of both worlds. It allowed us to create real UnitTests with mocked dependencies, while the tests can still be comfortably executed from inside the Unreal Editor. Future automation or UI tests can easily be integrated.

Our current setup

  • GoogleTest as a plugin, initially taken from Nans Pellicari's Git repository. We made a small change to Google's code, which I will explain in the next paragraph.
  • A new module inside our project contains all our tests and has the GoogleTest plugin as dependency.
  • A bunch of facades and adapters between our code and the Unreal dependencies, so that we can test classes and methods that are closely coupled to Unreal.
  • One hundred percent test coverage for all code that is decoupled from Unreal dependencies.

Small adaptation to the GoogleMock code


Update: The adaptation is not needed anymore

Read below for another (better) way of logging to Unreal.

After creating our tests and mocks with a combination of Unreal Automation and GoogleMock, we had one small but essential issue left. Our EXPECT calls on mocks were evaluated, but their output was logged only to stdout and not to anything Unreal-related. This filled our logs with failed expectations, yet the tests inside of Unreal still showed green as long as no Automation assertions failed. Our first attempt to solve this issue was to somehow forward stdout to Unreal, but we could not get this solution to work. So the next approach was to change the GoogleMock code just a bit so that it logs directly to Unreal, which did the trick!

In the following snippet, we moved just the failure_reporter to public scope so that we could access it from outside, and we added a nullptr check (there is also an open pull request for this):

```cpp
// original failure reporter
class GoogleTestFailureReporter : public FailureReporterInterface {
 public:
  void ReportFailure(
    ...
    // unchanged
    ...
  }
};

// "setter" for custom failure reporter
static FailureReporterInterface* failure_reporter = nullptr;

GTEST_API_ FailureReporterInterface* GetFailureReporter() {
  if (failure_reporter == nullptr) {
    failure_reporter = new GoogleTestFailureReporter();
  }
  return failure_reporter;
}
```

This small change enabled us to replace the GoogleTestFailureReporter with our own implementation and adapt the logging behavior:


```cpp
class TestHelpersFailureReporter : public testing::internal::FailureReporterInterface {
public:
    void ReportFailure(FailureType type, const char* file, int line,
                       const std::string& message) override {
        const auto Message = FString(message.c_str());
        UE_LOG(LogTemp, Error, TEXT("%s"), *Message);
    }
};

...

// set custom failure reporter
::testing::internal::failure_reporter = new TestHelpersFailureReporter();
```

Our own implementation of the FailureReporterInterface allowed us to log directly to Unreal's error log. As these logs count as errors on the Unreal side, they make the tests fail automatically. The results are printed as a detailed log inside of Unreal, showing us what went wrong. This also works fine with our Jenkins CI.

Alternative (better) solution:

There is a way to log to Unreal without changing the GoogleMock source code, as shown in the following code snippet:

```cpp
class TestHelpersFailureReporter : public testing::EmptyTestEventListener
{
    void OnTestPartResult(const testing::TestPartResult& result) override
    {
        if (result.type() == testing::TestPartResult::kFatalFailure
            || result.type() == testing::TestPartResult::kNonFatalFailure)
        {
            const auto Message = FString(result.message());
            UE_LOG(LogTemp, Error, TEXT("%s"), *Message);
        }
    }
};

void FTestModule::StartupModule()
{
    UE_LOG(LogTemp, Log, TEXT("TestModule has started!"));

    ::testing::TestEventListeners& Listeners = ::testing::UnitTest::GetInstance()->listeners();
    Listeners.Append(new TestHelpersFailureReporter());
}
```

Facades and Adapters

When testing code that is tightly coupled to Unreal services and classes or uses UObjects, AActors, and so on, writing UnitTests gets quite challenging. The problem is that you naturally cannot mock most of these classes, as they usually don't implement interfaces or, even worse, are based on generated code, like UObjects. We could mostly overcome this challenge by using facades and adapters between our code and Unreal code.

How these adapters work is explained in detail in Part 3 of Eric Lemes' blog post.

The facade concept has many similarities to adapters. The big difference is that facades contain actual logic, while adapters mainly forward the method call to the real implementation - possibly with some nullptr/validity checks. A facade's central purpose is not primarily to enable mocking but to create a clean and easy-to-use interface for accessing one or multiple existing APIs or services. In our project, we created such a facade to access Unreal's Variant Manager.

The Variant Manager is used to create different Variants of, e.g., models inside of Unreal and then activate or deactivate them. This gives the user the possibility to quickly switch between different models of a car, for example. As the Variant Manager is relatively new to Unreal, using it still feels a bit "beta-ish." Also, looking at the source code gave us the impression that there are probably still some changes to come. Being ready for future API changes seemed to be another good reason for creating a facade for the Variant Manager.

How we mock

The remarkable thing about GoogleMock is not only that it works very well, but also that it is nicely documented and that there are many published usage examples, e.g., on Stack Overflow. The first source of information on GoogleMock is usually Google's "gMock for Dummies" and the "gMock Cookbook"; both can be found here.

One behavior of mocks I want to point out here is that for the expectations to be evaluated, the mocks must be created anew for each individual test and deleted afterward. Discovering this was very important to us, as we try to create as many precise expectations as possible for our mocks.

I will now detail the setup for a single test and add examples.

Phases of a single test

Setup:

  • Creates or fetches an instance of the class to test
  • Initializes all class dependencies with "default" Mocks

Test:

  • Tests a single core behavior of a method
  • Usually has several expectations on mock method calls

Teardown:

  • Deletes and resets all dependencies created in Setup
  • On deletion of the mocks, their expectations are evaluated


[ ActivateVariant() - Method ]

```cpp
void FVariantSetDetailInteractor::ActivateVariant(const ActivateVariant::FRequest& Request)
{
    ICustomVariantSet* SelectedVariantSet = GetVariantManagerService()->GetVariantSetByName(GetSceneDatastore()->GetSelectedVariantSetName());
    if (SelectedVariantSet != nullptr)
    {
        const auto Variant = GetVariantManagerService()->FetchVariantByName(SelectedVariantSet, Request.VariantName);
        if (Variant
            && GetVariantManagerService()->ActivateVariant(SelectedVariantSet, Variant)
            == EVariantManagerServiceResult::Success)
        {
            ListVariants();
            return;
        }
    }
    GetPresenter()->PresentError(ShowError::FResponse{"ActivateVariant failed"});
}
```

[ ActivateVariant() - Test ]

```cpp
IMPLEMENT_SIMPLE_AUTOMATION_TEST(ActivateVariant_CallsActivateVariant,
    "CustomVariantManager.4Application.VariantSetDetailInteractor.ActivateVariant.CallsActivateVariant", DEFAULT_TEST_FLAGS)

bool ActivateVariant_CallsActivateVariant::RunTest(const FString& Parameters)
{
    Setup();

    EXPECT_CALL(*Mock->MockVariantManagerService, ActivateVariant)
        .Times(Exactly(1))
        .WillRepeatedly(Return(EVariantManagerServiceResult::Success));

    EXPECT_CALL(*Mock->MockVariantManagerService, FetchVariantByName(Mock->DefaultVariantSet, Mock->FindableVariantName))
        .Times(Exactly(1))
        .WillRepeatedly(Return(Mock->DefaultVariantWithName));

    VerifyPresentListVariantsCalled(1);
    VerifyPresentErrorCalled(0);

    GeometryVariantSetDetailInteractor->ActivateVariant(ActivateVariant::FRequest{Mock->FindableVariantName});

    Teardown();
    return true;
}
```

The above test simply checks whether the correct mocks were called with the expected parameters; it verifies the right outcome and ensures that no error occurred. As "ActivateVariant" has no return value, we can only test the behavior inside the method by evaluating the expectations on mock objects.

[ CreateCameraVariant() - Method ]

```cpp
ICameraVariant* FStandardVariantManagerService::CreateCameraVariant(ICameraVariantSet* CameraVariantSet, UCameraComponent* CameraComponent)
{
    IEpicVariantSet* EpicVariantSet = dynamic_cast<IEpicVariantSet*>(CameraVariantSet);
    if (!CameraComponent
        || !EpicVariantSet)
        return nullptr;

    ICameraVariant* CameraVariant = GetEpicVariantManagerFacade()->CreateCameraVariant(EpicVariantSet, CameraComponent);
    if (RefreshCameraVariant(dynamic_cast<IEpicVariant*>(CameraVariant)))
        return CameraVariant;

    return nullptr;
}

bool FStandardVariantManagerService::RefreshCameraVariant(IEpicVariant* EpicVariant)
{
    if (!EpicVariant)
        return false;

    const TArray<UObject*> BoundedObjects = EpicVariant->GetBoundedObjects();
    if (BoundedObjects.Num() != 1)
        return false;

    TArray<TSharedPtr<FCapturableProperty>> AllCapturableProperties;
    GetEpicVariantManagerFacade()->CaptureProperties(BoundedObjects, AllCapturableProperties, "", false);

    const TArray<TSharedPtr<FCapturableProperty>> FilteredProperties = AllCapturableProperties.FilterByPredicate(
        [](TSharedPtr<FCapturableProperty> LoopProperty)
        {
            return ICameraVariant::GetCameraPropertyNames().Contains(LoopProperty->DisplayName);
        });

    const TArray<UVariantObjectBinding*> Bindings = GetEpicVariantManagerFacade()->CreateObjectBindingsOfObjects(BoundedObjects, {EpicVariant});
    if (Bindings.Num() == 0
        || FilteredProperties.Num() == 0)
        return false;

    GetEpicVariantManagerFacade()->CreatePropertyCaptures(FilteredProperties, Bindings, true);
    return true;
}
```

These methods already make extensive use of adapters and facades. We adapted the Variant Manager's "UVariant" with an "IEpicVariant" and did the same for UVariantSet. The methods also use a facade to avoid talking directly to the Variant Manager - "GetEpicVariantManagerFacade()." Currently, all methods that are even remotely Variant Manager-related live inside this "EpicVariantManagerFacade." Later on, we may split that facade into multiple smaller ones with distinct purposes.

[ CreateCameraVariant() - Test ]

```cpp
// Random pointers
UCameraComponent* FakeCameraComponent = reinterpret_cast<UCameraComponent*>(0x28ff44);
TArray<UObject*> FakeUObjects = {reinterpret_cast<UObject*>(0x000001)};
TArray<UVariantObjectBinding*> FakeObjectBindings = {reinterpret_cast<UVariantObjectBinding*>(0x000003), reinterpret_cast<UVariantObjectBinding*>(0x000004)};

IMPLEMENT_SIMPLE_AUTOMATION_TEST(CreateCameraVariant_CallsCreateCameraVariant_ReturnsVariant,
    "CustomVariantManager.2External.VariantManagerService.CreateCameraVariant.CallsCreateCameraVariant_ReturnsVariant", DEFAULT_TEST_FLAGS)

bool CreateCameraVariant_CallsCreateCameraVariant_ReturnsVariant::RunTest(const FString& Parameters)
{
    Setup();
    const TArray<TSharedPtr<FCapturableProperty>> CapturableProperties = CreateDefaultCapturableProperties();

    EXPECT_CALL(*MockVariantManagerFacade, CreateCameraVariant(EpicCameraVariantSet, _))
        .Times(Exactly(1))
        .WillRepeatedly(Return(DefaultCameraVariant));

    EXPECT_CALL(*EpicCameraVariant, GetBoundedObjects())
        .Times(Exactly(1))
        .WillRepeatedly(Return(FakeUObjects));

    EXPECT_CALL(*MockVariantManagerFacade, CaptureProperties(_, _, _, _))
        .Times(Exactly(1))
        .WillRepeatedly(SetArgReferee<1>(CapturableProperties));

    EXPECT_CALL(*MockVariantManagerFacade, CreateObjectBindingsOfObjects(FakeUObjects, _))
        .Times(Exactly(1))
        .WillRepeatedly(Return(FakeObjectBindings));

    EXPECT_CALL(*MockVariantManagerFacade, CreatePropertyCaptures(
        TestUtils::ArraysHaveWantedSize<TSharedPtr<FCapturableProperty>>(ICameraVariant::GetCameraPropertyNames().Num()), FakeObjectBindings, true))
        .Times(Exactly(1));

    TestEqual(TEXT("Should be equal"),
        VariantManagerService->CreateCameraVariant(DefaultCameraVariantSet, FakeCameraComponent), static_cast<ICameraVariant*>(DefaultCameraVariant));

    Teardown();
    return true;
}
```

The tests for this method mainly verify the behavior of the internally used private method "RefreshCameraVariant." As that method is considerably more complex, the test is also a bit more complicated.

The interesting part of this test is that we did not want to mock away all dependencies on UObjects but still wanted to make the method testable. We achieved this by using "Fake-UObjects," which are only pointers to arbitrary memory addresses that are luckily never dereferenced, as they are only used by mock objects:

```cpp
// Random pointers
UCameraComponent* FakeCameraComponent = reinterpret_cast<UCameraComponent*>(0x28ff44);
TArray<UObject*> FakeUObjects = {reinterpret_cast<UObject*>(0x000001)};
TArray<UVariantObjectBinding*> FakeObjectBindings = {reinterpret_cast<UVariantObjectBinding*>(0x000003), reinterpret_cast<UVariantObjectBinding*>(0x000004)};
```

Another interesting part is that we use the power of GoogleMock to fill the “OutParameter” of a method with the wanted values (CapturableProperties):

```cpp
EXPECT_CALL(*MockVariantManagerFacade, CaptureProperties(_, _, _, _))
    .Times(Exactly(1))
    .WillRepeatedly(SetArgReferee<1>(CapturableProperties));
```

Finally, this test also uses a matcher to check whether an array passed as a parameter has the correct size:

```cpp
EXPECT_CALL(*MockVariantManagerFacade, CreatePropertyCaptures(
    TestUtils::ArraysHaveWantedSize<TSharedPtr<FCapturableProperty>>(ICameraVariant::GetCameraPropertyNames().Num()), FakeObjectBindings, true))
    .Times(Exactly(1));
```


You can find a minimum working example for using Unreal together with GoogleMock on GitHub. The Unreal project in the repository can be started and contains around 30 tests that can be executed and are hopefully all green. The implementation of the "ArraysHaveWantedSize" matcher can be found there as well.

Next steps

With our UnitTest setup working fine, we are currently working on integration tests and UI tests. In terms of UI, we already created some screenshot tests that can compare a given screenshot with one created during test execution. Regarding integration tests, we are currently trying to develop them primarily using blueprints, as our test engineer does not speak C++ fluently. It seems quite promising, though, and will maybe be a topic for a future article.

We are also currently updating our CI/CD by adding a GPU to our setup. It is needed because running the Unreal Editor for tests requires an accessible GPU. As our Jenkins is running on AWS, this should hopefully not be a big problem.