@@ -18,7 +18,8 @@ set(${library_name}_LIBRARY ${library_name} PARENT_SCOPE)
 if(NOT WIN32)
     target_compile_options(${library_name}
-        PRIVATE -Wno-deprecated-declarations -Wno-implicit-fallthrough)
+        PRIVATE -Wno-deprecated-declarations -Wno-implicit-fallthrough
+        -Wno-deprecated-enum-enum-conversion)
 endif()
 target_link_libraries(${library_name}
......
[24apr18]
from website https://github.com/google/googletest/releases
downloaded https://github.com/google/googletest/archive/release-1.8.0.tar.gz
unpacking yields googletest-release-1.8.0
we only use the subdirectory googletest
moved it to ThirdParty/common/gtest/gtest-1.8.0
removed subdirectory samples
[2oct20]
from website https://github.com/google/googletest/releases
download https://github.com/google/googletest/archive/v1.10.x.tar.gz
unpacking yields googletest-release-1.10.0
Due to changes in the cmake machinery, the full folder is used, but googlemock is not needed and removed
move the whole thing (minus googlemock) to gtest-1.10.0
removed subdirectory samples
add modified options to gtest/CMakeLists.txt
[11dec20]
from website https://github.com/google/googletest/releases
download https://github.com/google/googletest/archive/v1.10.x.tar.gz
unpacking yields googletest-release-1.10.0
Due to changes in the cmake machinery, the full folder is used
move the whole thing to gtest-1.10.0
removed subdirectory samples
add modified options to gtest/CMakeLists.txt
[10feb22]
from website https://github.com/google/googletest/releases/tag/release-1.11.0
download googletest-release-1.11.0.zip
unpacking yields googletest-release-1.11.0
the full folder is used
move the whole folder to gtest-1.11.0
slightly modified gtest/CMakeLists.txt by adding a status message
[2may23]
- download tgz source archive from website https://github.com/google/googletest/releases/
- unpack out-of-source
- remove gtest-<oldversion>, create gtest-<version>
- update gtest/CMakeLists.txt
- from unpacked gtest source, copy CMakeLists.txt and directory googletest to gtest/gtest-<version>.
############################################################################
# CMakeLists.txt file for building gtest library
############################################################################
# Instructs CMake to consider libgtest.so as our project library (in spite of
# the fact that it is not installed) and to provide @rpath instead of
# hardcoded links.
# BornAgain/3rdparty/common/gtest
#
# Our settings for Google Test
if(APPLE)
    # provide @rpath instead of hardcoded links
    set(CMAKE_MACOSX_RPATH 1)
endif()

option(BUILD_SHARED_LIBS "Build shared libraries (DLLs)." ON)
option(BUILD_GMOCK "" OFF)
option(INSTALL_GTEST "" OFF)

set(GTEST_VERSION 1.13.0)
message(STATUS "Configure Google Test ${GTEST_VERSION} ...")
add_subdirectory(gtest-${GTEST_VERSION})
message(STATUS "... Google Test configured")
# Copyright 2017 Google Inc.
# All Rights Reserved.
#
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Bazel Build for Google C++ Testing Framework (Google Test)
load("@rules_cc//cc:defs.bzl", "cc_library", "cc_test")
package(default_visibility = ["//visibility:public"])
licenses(["notice"])
exports_files(["LICENSE"])
config_setting(
    name = "windows",
    constraint_values = ["@platforms//os:windows"],
)

config_setting(
    name = "msvc_compiler",
    flag_values = {
        "@bazel_tools//tools/cpp:compiler": "msvc-cl",
    },
    visibility = [":__subpackages__"],
)

config_setting(
    name = "has_absl",
    values = {"define": "absl=1"},
)

# Library that defines the FRIEND_TEST macro.
cc_library(
    name = "gtest_prod",
    hdrs = ["googletest/include/gtest/gtest_prod.h"],
    includes = ["googletest/include"],
)

# Google Test including Google Mock
cc_library(
    name = "gtest",
    srcs = glob(
        include = [
            "googletest/src/*.cc",
            "googletest/src/*.h",
            "googletest/include/gtest/**/*.h",
            "googlemock/src/*.cc",
            "googlemock/include/gmock/**/*.h",
        ],
        exclude = [
            "googletest/src/gtest-all.cc",
            "googletest/src/gtest_main.cc",
            "googlemock/src/gmock-all.cc",
            "googlemock/src/gmock_main.cc",
        ],
    ),
    hdrs = glob([
        "googletest/include/gtest/*.h",
        "googlemock/include/gmock/*.h",
    ]),
    copts = select({
        ":windows": [],
        "//conditions:default": ["-pthread"],
    }),
    defines = select({
        ":has_absl": ["GTEST_HAS_ABSL=1"],
        "//conditions:default": [],
    }),
    features = select({
        ":windows": ["windows_export_all_symbols"],
        "//conditions:default": [],
    }),
    includes = [
        "googlemock",
        "googlemock/include",
        "googletest",
        "googletest/include",
    ],
    linkopts = select({
        ":windows": [],
        "//conditions:default": ["-pthread"],
    }),
    deps = select({
        ":has_absl": [
            "@com_google_absl//absl/debugging:failure_signal_handler",
            "@com_google_absl//absl/debugging:stacktrace",
            "@com_google_absl//absl/debugging:symbolize",
            "@com_google_absl//absl/strings",
            "@com_google_absl//absl/types:any",
            "@com_google_absl//absl/types:optional",
            "@com_google_absl//absl/types:variant",
        ],
        "//conditions:default": [],
    }),
)

cc_library(
    name = "gtest_main",
    srcs = ["googlemock/src/gmock_main.cc"],
    features = select({
        ":windows": ["windows_export_all_symbols"],
        "//conditions:default": [],
    }),
    deps = [":gtest"],
)

# The following rules build samples of how to use gTest.
cc_library(
    name = "gtest_sample_lib",
    srcs = [
        "googletest/samples/sample1.cc",
        "googletest/samples/sample2.cc",
        "googletest/samples/sample4.cc",
    ],
    hdrs = [
        "googletest/samples/prime_tables.h",
        "googletest/samples/sample1.h",
        "googletest/samples/sample2.h",
        "googletest/samples/sample3-inl.h",
        "googletest/samples/sample4.h",
    ],
    features = select({
        ":windows": ["windows_export_all_symbols"],
        "//conditions:default": [],
    }),
)

cc_test(
    name = "gtest_samples",
    size = "small",
    # All Samples except:
    #   sample9 (main)
    #   sample10 (main and takes a command line option and needs to be separate)
    srcs = [
        "googletest/samples/sample1_unittest.cc",
        "googletest/samples/sample2_unittest.cc",
        "googletest/samples/sample3_unittest.cc",
        "googletest/samples/sample4_unittest.cc",
        "googletest/samples/sample5_unittest.cc",
        "googletest/samples/sample6_unittest.cc",
        "googletest/samples/sample7_unittest.cc",
        "googletest/samples/sample8_unittest.cc",
    ],
    linkstatic = 0,
    deps = [
        "gtest_sample_lib",
        ":gtest_main",
    ],
)

cc_test(
    name = "sample9_unittest",
    size = "small",
    srcs = ["googletest/samples/sample9_unittest.cc"],
    deps = [":gtest"],
)

cc_test(
    name = "sample10_unittest",
    size = "small",
    srcs = ["googletest/samples/sample10_unittest.cc"],
    deps = [":gtest"],
)
# How to become a contributor and submit your own code
## Contributor License Agreements
We'd love to accept your patches! Before we can take them, we have to jump a
couple of legal hurdles.
Please fill out either the individual or corporate Contributor License Agreement
(CLA).
* If you are an individual writing original source code and you're sure you
own the intellectual property, then you'll need to sign an
[individual CLA](https://developers.google.com/open-source/cla/individual).
* If you work for a company that wants to allow you to contribute your work,
then you'll need to sign a
[corporate CLA](https://developers.google.com/open-source/cla/corporate).
Follow either of the two links above to access the appropriate CLA and
instructions for how to sign and return it. Once we receive it, we'll be able to
accept your pull requests.
## Are you a Googler?
If you are a Googler, please make an attempt to submit an internal change rather
than a GitHub Pull Request. If you are not able to submit an internal change a
PR is acceptable as an alternative.
## Contributing A Patch
1. Submit an issue describing your proposed change to the
[issue tracker](https://github.com/google/googletest/issues).
2. Please don't mix more than one logical change per submittal, because it
makes the history hard to follow. If you want to make a change that doesn't
have a corresponding issue in the issue tracker, please create one.
3. Also, coordinate with team members that are listed on the issue in question.
This ensures that work isn't duplicated, and communicating your plan early
generally leads to better patches.
4. If your proposed change is accepted, and you haven't already done so, sign a
Contributor License Agreement (see details above).
5. Fork the desired repo, develop and test your code changes.
6. Ensure that your code adheres to the existing style in the sample to which
you are contributing.
7. Ensure that your code has an appropriate set of unit tests which all pass.
8. Submit a pull request.
## The Google Test and Google Mock Communities
The Google Test community exists primarily through the
[discussion group](http://groups.google.com/group/googletestframework) and the
GitHub repository. Likewise, the Google Mock community exists primarily through
their own [discussion group](http://groups.google.com/group/googlemock). You are
definitely encouraged to contribute to the discussion and you can also help us
to keep the effectiveness of the group high by following and promoting the
guidelines listed here.
### Please Be Friendly
Showing courtesy and respect to others is a vital part of the Google culture,
and we strongly encourage everyone participating in Google Test development to
join us in accepting nothing less. Of course, being courteous is not the same as
failing to constructively disagree with each other, but it does mean that we
should be respectful of each other when enumerating the 42 technical reasons
that a particular proposal may not be the best choice. There's never a reason to
be antagonistic or dismissive toward anyone who is sincerely trying to
contribute to a discussion.
Sure, C++ testing is serious business and all that, but it's also a lot of fun.
Let's keep it that way. Let's strive to be one of the friendliest communities in
all of open source.
As always, discuss Google Test in the official GoogleTest discussion group. You
don't have to actually submit code in order to sign up. Your participation
itself is a valuable contribution.
## Style
To keep the source consistent, readable, diffable and easy to merge, we use a
fairly rigid coding style, as defined by the
[google-styleguide](https://github.com/google/styleguide) project. All patches
will be expected to conform to the style outlined
[here](https://google.github.io/styleguide/cppguide.html). Use
[.clang-format](https://github.com/google/googletest/blob/master/.clang-format)
to check your formatting.
## Requirements for Contributors
If you plan to contribute a patch, you need to build Google Test, Google Mock,
and their own tests from a git checkout, which has further requirements:
* [Python](https://www.python.org/) v2.3 or newer (for running some of the
tests and re-generating certain source files from templates)
* [CMake](https://cmake.org/) v2.8.12 or newer
## Developing Google Test and Google Mock
This section discusses how to make your own changes to the Google Test project.
### Testing Google Test and Google Mock Themselves
To make sure your changes work as intended and don't break existing
functionality, you'll want to compile and run Google Test and GoogleMock's own
tests. For that you can use CMake:
```
mkdir mybuild
cd mybuild
cmake -Dgtest_build_tests=ON -Dgmock_build_tests=ON ${GTEST_REPO_DIR}
```

To choose between building only Google Test or Google Mock, you may modify your
cmake command to be one of the following:

```
cmake -Dgtest_build_tests=ON ${GTEST_DIR}  # sets up Google Test tests
cmake -Dgmock_build_tests=ON ${GMOCK_DIR}  # sets up Google Mock tests
```

Make sure you have Python installed, as some of Google Test's tests are written
in Python. If the cmake command complains about not being able to find Python
(`Could NOT find PythonInterp (missing: PYTHON_EXECUTABLE)`), try telling it
explicitly where your Python executable can be found:

```
cmake -DPYTHON_EXECUTABLE=path/to/python ...
```

Next, you can build Google Test and / or Google Mock and all desired tests. On
\*nix, this is usually done by

```
make
```

To run the tests, do

```
make test
```
All tests should pass.
# This file contains a list of people who've made non-trivial
# contribution to the Google C++ Testing Framework project. People
# who commit code to the project are encouraged to add their names
# here. Please keep the list sorted by first names.
Ajay Joshi <jaj@google.com>
Balázs Dán <balazs.dan@gmail.com>
Benoit Sigoure <tsuna@google.com>
Bharat Mediratta <bharat@menalto.com>
Bogdan Piloca <boo@google.com>
Chandler Carruth <chandlerc@google.com>
Chris Prince <cprince@google.com>
Chris Taylor <taylorc@google.com>
Dan Egnor <egnor@google.com>
Dave MacLachlan <dmaclach@gmail.com>
David Anderson <danderson@google.com>
Dean Sturtevant
Eric Roman <eroman@chromium.org>
Gene Volovich <gv@cite.com>
Hady Zalek <hady.zalek@gmail.com>
Hal Burch <gmock@hburch.com>
Jeffrey Yasskin <jyasskin@google.com>
Jim Keller <jimkeller@google.com>
Joe Walnes <joe@truemesh.com>
Jon Wray <jwray@google.com>
Jói Sigurðsson <joi@google.com>
Keir Mierle <mierle@gmail.com>
Keith Ray <keith.ray@gmail.com>
Kenton Varda <kenton@google.com>
Kostya Serebryany <kcc@google.com>
Krystian Kuzniarek <krystian.kuzniarek@gmail.com>
Lev Makhlis
Manuel Klimek <klimek@google.com>
Mario Tanev <radix@google.com>
Mark Paskin
Markus Heule <markus.heule@gmail.com>
Matthew Simmons <simmonmt@acm.org>
Mika Raento <mikie@iki.fi>
Mike Bland <mbland@google.com>
Miklós Fazekas <mfazekas@szemafor.com>
Neal Norwitz <nnorwitz@gmail.com>
Nermin Ozkiranartli <nermin@google.com>
Owen Carlsen <ocarlsen@google.com>
Paneendra Ba <paneendra@google.com>
Pasi Valminen <pasi.valminen@gmail.com>
Patrick Hanna <phanna@google.com>
Patrick Riley <pfr@google.com>
Paul Menage <menage@google.com>
Peter Kaminski <piotrk@google.com>
Piotr Kaminski <piotrk@google.com>
Preston Jackson <preston.a.jackson@gmail.com>
Rainer Klaffenboeck <rainer.klaffenboeck@dynatrace.com>
Russ Cox <rsc@google.com>
Russ Rufer <russ@pentad.com>
Sean Mcafee <eefacm@gmail.com>
Sigurður Ásgeirsson <siggi@google.com>
Sverre Sundsdal <sundsdal@gmail.com>
Takeshi Yoshino <tyoshino@google.com>
Tracy Bialik <tracy@pentad.com>
Vadim Berman <vadimb@google.com>
Vlad Losev <vladl@google.com>
Wolfgang Klier <wklier@google.com>
Zhanyong Wan <wan@google.com>
Copyright 2008, Google Inc.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# GoogleTest
### Announcements
#### Live at Head
GoogleTest now follows the
[Abseil Live at Head philosophy](https://abseil.io/about/philosophy#upgrade-support).
We recommend using the latest commit in the `master` branch in your projects.
#### Documentation Updates
Our documentation is now live on GitHub Pages at
https://google.github.io/googletest/. We recommend browsing the documentation on
GitHub Pages rather than directly in the repository.
#### Release 1.10.x
[Release 1.10.x](https://github.com/google/googletest/releases/tag/release-1.10.0)
is now available.
#### Coming Soon
* We are planning to take a dependency on
[Abseil](https://github.com/abseil/abseil-cpp).
* More documentation improvements are planned.
## Welcome to **GoogleTest**, Google's C++ test framework!
This repository is a merger of the formerly separate GoogleTest and GoogleMock
projects. These were so closely related that it makes sense to maintain and
release them together.
### Getting Started
See the [GoogleTest User's Guide](https://google.github.io/googletest/) for
documentation. We recommend starting with the
[GoogleTest Primer](https://google.github.io/googletest/primer.html).
More information about building GoogleTest can be found at
[googletest/README.md](googletest/README.md).
## Features
* An [xUnit](https://en.wikipedia.org/wiki/XUnit) test framework.
* Test discovery.
* A rich set of assertions.
* User-defined assertions.
* Death tests.
* Fatal and non-fatal failures.
* Value-parameterized tests.
* Type-parameterized tests.
* Various options for running the tests.
* XML test report generation.
## Supported Platforms
GoogleTest requires a codebase and compiler compliant with the C++11 standard or
newer.
The GoogleTest code is officially supported on the following platforms.
Operating systems or tools not listed below are community-supported. For
community-supported platforms, patches that do not complicate the code may be
considered.
If you notice any problems on your platform, please file an issue on the
[GoogleTest GitHub Issue Tracker](https://github.com/google/googletest/issues).
Pull requests containing fixes are welcome!
### Operating Systems
* Linux
* macOS
* Windows
### Compilers
* gcc 5.0+
* clang 5.0+
* MSVC 2015+
**macOS users:** Xcode 9.3+ provides clang 5.0+.
### Build Systems
* [Bazel](https://bazel.build/)
* [CMake](https://cmake.org/)
**Note:** Bazel is the build system used by the team internally and in tests.
CMake is supported on a best-effort basis and by the community.
## Who Is Using GoogleTest?
In addition to many internal projects at Google, GoogleTest is also used by the
following notable projects:
* The [Chromium projects](http://www.chromium.org/) (behind the Chrome browser
and Chrome OS).
* The [LLVM](http://llvm.org/) compiler.
* [Protocol Buffers](https://github.com/google/protobuf), Google's data
interchange format.
* The [OpenCV](http://opencv.org/) computer vision library.
## Related Open Source Projects
[GTest Runner](https://github.com/nholthaus/gtest-runner) is a Qt5-based
automated test-runner and graphical user interface with powerful features for
Windows and Linux platforms.
[GoogleTest UI](https://github.com/ospector/gtest-gbar) is a test runner that
runs your test binary, allows you to track its progress via a progress bar, and
displays a list of test failures. Clicking on one shows failure text. Google
Test UI is written in C#.
[GTest TAP Listener](https://github.com/kinow/gtest-tap-listener) is an event
listener for GoogleTest that implements the
[TAP protocol](https://en.wikipedia.org/wiki/Test_Anything_Protocol) for test
result output. If your test runner understands TAP, you may find it useful.
[gtest-parallel](https://github.com/google/gtest-parallel) is a test runner that
runs tests from your binary in parallel to provide significant speed-up.
[GoogleTest Adapter](https://marketplace.visualstudio.com/items?itemName=DavidSchuldenfrei.gtest-adapter)
is a VS Code extension that lets you view GoogleTest in a tree view and
run/debug your tests.
[C++ TestMate](https://github.com/matepek/vscode-catch2-test-adapter) is a VS
Code extension that lets you view GoogleTest in a tree view and run/debug your
tests.
[Cornichon](https://pypi.org/project/cornichon/) is a small Gherkin DSL parser
that generates stub code for GoogleTest.
## Contributing Changes
Please read
[`CONTRIBUTING.md`](https://github.com/google/googletest/blob/master/CONTRIBUTING.md)
for details on how to contribute to this project.
Happy testing!
workspace(name = "com_google_googletest")

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "com_google_absl",
    urls = ["https://github.com/abseil/abseil-cpp/archive/7971fb358ae376e016d2d4fc9327aad95659b25e.zip"],  # 2021-05-20T02:59:16Z
    strip_prefix = "abseil-cpp-7971fb358ae376e016d2d4fc9327aad95659b25e",
    sha256 = "aeba534f7307e36fe084b452299e49b97420667a8d28102cf9a0daeed340b859",
)

http_archive(
    name = "rules_cc",
    urls = ["https://github.com/bazelbuild/rules_cc/archive/68cb652a71e7e7e2858c50593e5a9e3b94e5b9a9.zip"],  # 2021-05-14T14:51:14Z
    strip_prefix = "rules_cc-68cb652a71e7e7e2858c50593e5a9e3b94e5b9a9",
    sha256 = "1e19e9a3bc3d4ee91d7fcad00653485ee6c798efbbf9588d40b34cbfbded143d",
)

http_archive(
    name = "rules_python",
    urls = ["https://github.com/bazelbuild/rules_python/archive/ed6cc8f2c3692a6a7f013ff8bc185ba77eb9b4d2.zip"],  # 2021-05-17T00:24:16Z
    strip_prefix = "rules_python-ed6cc8f2c3692a6a7f013ff8bc185ba77eb9b4d2",
    sha256 = "98b3c592faea9636ac8444bfd9de7f3fb4c60590932d6e6ac5946e3f8dbd5ff6",
)
#!/bin/bash
#
# Copyright 2020, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
set -euox pipefail
readonly LINUX_LATEST_CONTAINER="gcr.io/google.com/absl-177019/linux_hybrid-latest:20210525"
readonly LINUX_GCC_FLOOR_CONTAINER="gcr.io/google.com/absl-177019/linux_gcc-floor:20201015"
if [[ -z ${GTEST_ROOT:-} ]]; then
  GTEST_ROOT="$(realpath $(dirname ${0})/..)"
fi

if [[ -z ${STD:-} ]]; then
  STD="c++11 c++14 c++17 c++20"
fi
# Test the CMake build
for cc in /usr/local/bin/gcc /opt/llvm/clang/bin/clang; do
  for cmake_off_on in OFF ON; do
    time docker run \
      --volume="${GTEST_ROOT}:/src:ro" \
      --tmpfs="/build:exec" \
      --workdir="/build" \
      --rm \
      --env="CC=${cc}" \
      --env="CXX_FLAGS=\"-Werror -Wdeprecated\"" \
      ${LINUX_LATEST_CONTAINER} \
      /bin/bash -c "
        cmake /src \
          -DCMAKE_CXX_STANDARD=11 \
          -Dgtest_build_samples=ON \
          -Dgtest_build_tests=ON \
          -Dgmock_build_tests=ON \
          -Dcxx_no_exception=${cmake_off_on} \
          -Dcxx_no_rtti=${cmake_off_on} && \
        make -j$(nproc) && \
        ctest -j$(nproc) --output-on-failure"
  done
done
# Do one test with an older version of GCC
time docker run \
  --volume="${GTEST_ROOT}:/src:ro" \
  --workdir="/src" \
  --rm \
  --env="CC=/usr/local/bin/gcc" \
  ${LINUX_GCC_FLOOR_CONTAINER} \
  /usr/local/bin/bazel test ... \
    --copt="-Wall" \
    --copt="-Werror" \
    --copt="-Wno-error=pragmas" \
    --keep_going \
    --show_timestamps \
    --test_output=errors
# Test GCC
for std in ${STD}; do
  for absl in 0 1; do
    time docker run \
      --volume="${GTEST_ROOT}:/src:ro" \
      --workdir="/src" \
      --rm \
      --env="CC=/usr/local/bin/gcc" \
      --env="BAZEL_CXXOPTS=-std=${std}" \
      ${LINUX_LATEST_CONTAINER} \
      /usr/local/bin/bazel test ... \
        --copt="-Wall" \
        --copt="-Werror" \
        --define="absl=${absl}" \
        --distdir="/bazel-distdir" \
        --keep_going \
        --show_timestamps \
        --test_output=errors
  done
done
# Test Clang
for std in ${STD}; do
  for absl in 0 1; do
    time docker run \
      --volume="${GTEST_ROOT}:/src:ro" \
      --workdir="/src" \
      --rm \
      --env="CC=/opt/llvm/clang/bin/clang" \
      --env="BAZEL_CXXOPTS=-std=${std}" \
      ${LINUX_LATEST_CONTAINER} \
      /usr/local/bin/bazel test ... \
        --copt="--gcc-toolchain=/usr/local" \
        --copt="-Wall" \
        --copt="-Werror" \
        --define="absl=${absl}" \
        --distdir="/bazel-distdir" \
        --keep_going \
        --linkopt="--gcc-toolchain=/usr/local" \
        --show_timestamps \
        --test_output=errors
  done
done
#!/bin/bash
#
# Copyright 2020, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
set -euox pipefail
if [[ -z ${GTEST_ROOT:-} ]]; then
  GTEST_ROOT="$(realpath $(dirname ${0})/..)"
fi
# Test the CMake build
for cmake_off_on in OFF ON; do
  BUILD_DIR=$(mktemp -d build_dir.XXXXXXXX)
  cd ${BUILD_DIR}
  time cmake ${GTEST_ROOT} \
    -DCMAKE_CXX_STANDARD=11 \
    -Dgtest_build_samples=ON \
    -Dgtest_build_tests=ON \
    -Dgmock_build_tests=ON \
    -Dcxx_no_exception=${cmake_off_on} \
    -Dcxx_no_rtti=${cmake_off_on}
  time make
  time ctest -j$(nproc) --output-on-failure
done
# Test the Bazel build
# If we are running on Kokoro, check for a versioned Bazel binary.
KOKORO_GFILE_BAZEL_BIN="bazel-3.7.0-darwin-x86_64"
if [[ ${KOKORO_GFILE_DIR:-} ]] && [[ -f ${KOKORO_GFILE_DIR}/${KOKORO_GFILE_BAZEL_BIN} ]]; then
  BAZEL_BIN="${KOKORO_GFILE_DIR}/${KOKORO_GFILE_BAZEL_BIN}"
  chmod +x ${BAZEL_BIN}
else
  BAZEL_BIN="bazel"
fi
cd ${GTEST_ROOT}
for absl in 0 1; do
  ${BAZEL_BIN} test ... \
    --copt="-Wall" \
    --copt="-Werror" \
    --define="absl=${absl}" \
    --keep_going \
    --show_timestamps \
    --test_output=errors
done
title: GoogleTest
nav:
- section: "Get Started"
  items:
  - title: "Supported Platforms"
    url: "/platforms.html"
  - title: "Quickstart: Bazel"
    url: "/quickstart-bazel.html"
  - title: "Quickstart: CMake"
    url: "/quickstart-cmake.html"
- section: "Guides"
  items:
  - title: "GoogleTest Primer"
    url: "/primer.html"
  - title: "Advanced Topics"
    url: "/advanced.html"
  - title: "Mocking for Dummies"
    url: "/gmock_for_dummies.html"
  - title: "Mocking Cookbook"
    url: "/gmock_cook_book.html"
  - title: "Mocking Cheat Sheet"
    url: "/gmock_cheat_sheet.html"
- section: "References"
  items:
  - title: "Testing Reference"
    url: "/reference/testing.html"
  - title: "Mocking Reference"
    url: "/reference/mocking.html"
  - title: "Assertions"
    url: "/reference/assertions.html"
  - title: "Matchers"
    url: "/reference/matchers.html"
  - title: "Actions"
    url: "/reference/actions.html"
  - title: "Testing FAQ"
    url: "/faq.html"
  - title: "Mocking FAQ"
    url: "/gmock_faq.html"
  - title: "Code Samples"
    url: "/samples.html"
  - title: "Using pkg-config"
    url: "/pkgconfig.html"
  - title: "Community Documentation"
    url: "/community_created_documentation.html"
<!DOCTYPE html>
<html lang="{{ site.lang | default: "en-US" }}">
  <head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    {% seo %}
    <link rel="stylesheet" href="{{ "/assets/css/style.css?v=" | append: site.github.build_revision | relative_url }}">
    <script>
      window.ga=window.ga||function(){(ga.q=ga.q||[]).push(arguments)};ga.l=+new Date;
      ga('create', 'UA-197576187-1', { 'storage': 'none' });
      ga('set', 'referrer', document.referrer.split('?')[0]);
      ga('set', 'location', window.location.href.split('?')[0]);
      ga('set', 'anonymizeIp', true);
      ga('send', 'pageview');
    </script>
    <script async src='https://www.google-analytics.com/analytics.js'></script>
  </head>
  <body>
    <div class="sidebar">
      <div class="header">
        <h1><a href="{{ "/" | relative_url }}">{{ site.title | default: "Documentation" }}</a></h1>
      </div>
      <input type="checkbox" id="nav-toggle" class="nav-toggle">
      <label for="nav-toggle" class="expander">
        <span class="arrow"></span>
      </label>
      <nav>
        {% for item in site.data.navigation.nav %}
          <h2>{{ item.section }}</h2>
          <ul>
            {% for subitem in item.items %}
              <a href="{{subitem.url | relative_url }}">
                <li class="{% if subitem.url == page.url %}active{% endif %}">
                  {{ subitem.title }}
                </li>
              </a>
            {% endfor %}
          </ul>
        {% endfor %}
      </nav>
    </div>
    <div class="main markdown-body">
      <div class="main-inner">
        {{ content }}
      </div>
      <div class="footer">
        GoogleTest &middot;
        <a href="https://github.com/google/googletest">GitHub Repository</a> &middot;
        <a href="https://github.com/google/googletest/blob/master/LICENSE">License</a> &middot;
        <a href="https://policies.google.com/privacy">Privacy Policy</a>
      </div>
    </div>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/anchor-js/4.1.0/anchor.min.js" integrity="sha256-lZaRhKri35AyJSypXXs4o6OPFTbTmUoltBbDCbdzegg=" crossorigin="anonymous"></script>
    <script>anchors.add('.main h2, .main h3, .main h4, .main h5, .main h6');</script>
  </body>
</html>
// Styles for GoogleTest docs website on GitHub Pages.
// Color variables are defined in
// https://github.com/pages-themes/primer/tree/master/_sass/primer-support/lib/variables
$sidebar-width: 260px;
body {
  display: flex;
  margin: 0;
}

.sidebar {
  background: $black;
  color: $text-white;
  flex-shrink: 0;
  height: 100vh;
  overflow: auto;
  position: sticky;
  top: 0;
  width: $sidebar-width;
}

.sidebar h1 {
  font-size: 1.5em;
}

.sidebar h2 {
  color: $gray-light;
  font-size: 0.8em;
  font-weight: normal;
  margin-bottom: 0.8em;
  padding-left: 2.5em;
  text-transform: uppercase;
}

.sidebar .header {
  background: $black;
  padding: 2em;
  position: sticky;
  top: 0;
  width: 100%;
}

.sidebar .header a {
  color: $text-white;
  text-decoration: none;
}

.sidebar .nav-toggle {
  display: none;
}

.sidebar .expander {
  cursor: pointer;
  display: none;
  height: 3em;
  position: absolute;
  right: 1em;
  top: 1.5em;
  width: 3em;
}

.sidebar .expander .arrow {
  border: solid $white;
  border-width: 0 3px 3px 0;
  display: block;
  height: 0.7em;
  margin: 1em auto;
  transform: rotate(45deg);
  transition: transform 0.5s;
  width: 0.7em;
}

.sidebar nav {
  width: 100%;
}

.sidebar nav ul {
  list-style-type: none;
  margin-bottom: 1em;
  padding: 0;

  &:last-child {
    margin-bottom: 2em;
  }

  a {
    text-decoration: none;
  }

  li {
    color: $text-white;
    padding-left: 2em;
    text-decoration: none;
  }

  li.active {
    background: $border-gray-darker;
    font-weight: bold;
  }

  li:hover {
    background: $border-gray-darker;
  }
}

.main {
  background-color: $bg-gray;
  width: calc(100% - #{$sidebar-width});
}

.main .main-inner {
  background-color: $white;
  padding: 2em;
}

.main .footer {
  margin: 0;
  padding: 2em;
}

.main table th {
  text-align: left;
}

.main .callout {
  border-left: 0.25em solid $white;
  padding: 1em;

  a {
    text-decoration: underline;
  }

  &.important {
    background-color: $bg-yellow-light;
    border-color: $bg-yellow;
    color: $black;
  }

  &.note {
    background-color: $bg-blue-light;
    border-color: $text-blue;
    color: $text-blue;
  }

  &.tip {
    background-color: $green-000;
    border-color: $green-700;
    color: $green-700;
  }

  &.warning {
    background-color: $red-000;
    border-color: $text-red;
    color: $text-red;
  }
}

.main .good pre {
  background-color: $bg-green-light;
}

.main .bad pre {
  background-color: $red-000;
}

@media all and (max-width: 768px) {
  body {
    flex-direction: column;
  }

  .sidebar {
    height: auto;
    position: relative;
    width: 100%;
  }

  .sidebar .expander {
    display: block;
  }

  .sidebar nav {
    height: 0;
    overflow: hidden;
  }

  .sidebar .nav-toggle:checked {
    & ~ nav {
      height: auto;
    }

    & + .expander .arrow {
      transform: rotate(-135deg);
    }
  }

  .main {
    width: 100%;
  }
}
# Advanced googletest Topics
## Introduction
Now that you have read the [googletest Primer](primer.md) and learned how to
write tests using googletest, it's time to learn some new tricks. This document
will show you more assertions as well as how to construct complex failure
messages, propagate fatal failures, reuse and speed up your test fixtures, and
use various flags with your tests.
## More Assertions
This section covers some less frequently used, but still significant,
assertions.
### Explicit Success and Failure
See [Explicit Success and Failure](reference/assertions.md#success-failure) in
the Assertions Reference.
### Exception Assertions
See [Exception Assertions](reference/assertions.md#exceptions) in the Assertions
Reference.
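For quick orientation, here is a minimal sketch of the exception assertions
(`ThrowingFunction` is a hypothetical function assumed to throw
`std::out_of_range`):

```c++
#include <stdexcept>
#include "gtest/gtest.h"

// Hypothetical function under test, assumed to throw std::out_of_range.
int ThrowingFunction() { throw std::out_of_range("index too large"); }

TEST(ExceptionTest, Throws) {
  EXPECT_THROW(ThrowingFunction(), std::out_of_range);  // exact exception type
  EXPECT_ANY_THROW(ThrowingFunction());                 // any exception
  EXPECT_NO_THROW((void)0);                             // no exception at all
}
```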
### Predicate Assertions for Better Error Messages
Even though googletest has a rich set of assertions, they can never be complete,
as it's impossible (and not a good idea) to anticipate all scenarios a user might
run into. Therefore, sometimes a user has to use `EXPECT_TRUE()` to check a
complex expression, for lack of a better macro. This has the problem of not
showing you the values of the parts of the expression, making it hard to
understand what went wrong. As a workaround, some users choose to construct the
failure message by themselves, streaming it into `EXPECT_TRUE()`. However, this
is awkward especially when the expression has side-effects or is expensive to
evaluate.
googletest gives you three different options to solve this problem:
#### Using an Existing Boolean Function
If you already have a function or functor that returns `bool` (or a type that
can be implicitly converted to `bool`), you can use it in a *predicate
assertion* to get the function arguments printed for free. See
[`EXPECT_PRED*`](reference/assertions.md#EXPECT_PRED) in the Assertions
Reference for details.
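As a minimal sketch of this (using the hypothetical helper `MutuallyPrime`), a
failing `EXPECT_PRED2` automatically prints the values of both arguments:

```c++
#include "gtest/gtest.h"

// Hypothetical helper: true if m and n share no common divisor greater than 1.
bool MutuallyPrime(int m, int n) {
  for (int d = 2; d <= m && d <= n; ++d)
    if (m % d == 0 && n % d == 0) return false;
  return true;
}

TEST(PredicateTest, PrintsArguments) {
  int a = 4, b = 6;
  // On failure, the message shows the expression and that a = 4 and b = 6.
  EXPECT_PRED2(MutuallyPrime, a, b);
}
```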
#### Using a Function That Returns an AssertionResult
While `EXPECT_PRED*()` and friends are handy for a quick job, the syntax is not
satisfactory: you have to use different macros for different arities, and it
feels more like Lisp than C++. The `::testing::AssertionResult` class solves
this problem.
An `AssertionResult` object represents the result of an assertion (whether it's
a success or a failure, and an associated message). You can create an
`AssertionResult` using one of these factory functions:
```c++
namespace testing {
// Returns an AssertionResult object to indicate that an assertion has
// succeeded.
AssertionResult AssertionSuccess();
// Returns an AssertionResult object to indicate that an assertion has
// failed.
AssertionResult AssertionFailure();
}
```
You can then use the `<<` operator to stream messages to the `AssertionResult`
object.
To provide more readable messages in Boolean assertions (e.g. `EXPECT_TRUE()`),
write a predicate function that returns `AssertionResult` instead of `bool`. For
example, if you define `IsEven()` as:
```c++
testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return testing::AssertionSuccess();
  else
    return testing::AssertionFailure() << n << " is odd";
}
```
instead of:
```c++
bool IsEven(int n) {
  return (n % 2) == 0;
}
```
the failed assertion `EXPECT_TRUE(IsEven(Fib(4)))` will print:
```none
Value of: IsEven(Fib(4))
  Actual: false (3 is odd)
Expected: true
```
instead of a more opaque
```none
Value of: IsEven(Fib(4))
  Actual: false
Expected: true
```
If you want informative messages in `EXPECT_FALSE` and `ASSERT_FALSE` as well
(one third of Boolean assertions in the Google code base are negative ones), and
are fine with making the predicate slower in the success case, you can supply a
success message:
```c++
testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return testing::AssertionSuccess() << n << " is even";
  else
    return testing::AssertionFailure() << n << " is odd";
}
```
Then the statement `EXPECT_FALSE(IsEven(Fib(6)))` will print
```none
Value of: IsEven(Fib(6))
  Actual: true (8 is even)
Expected: false
```
#### Using a Predicate-Formatter
If you find the default message generated by
[`EXPECT_PRED*`](reference/assertions.md#EXPECT_PRED) and
[`EXPECT_TRUE`](reference/assertions.md#EXPECT_TRUE) unsatisfactory, or some
arguments to your predicate do not support streaming to `ostream`, you can
instead use *predicate-formatter assertions* to *fully* customize how the
message is formatted. See
[`EXPECT_PRED_FORMAT*`](reference/assertions.md#EXPECT_PRED_FORMAT) in the
Assertions Reference for details.
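As a minimal sketch, a predicate-formatter receives the textual representation
of each argument in addition to its value, so it can compose the whole failure
message itself (`AssertMutuallyPrime` and `MutuallyPrime` are hypothetical, the
latter as in the sketch above):

```c++
// Hypothetical predicate-formatter wrapping MutuallyPrime() from above.
testing::AssertionResult AssertMutuallyPrime(const char* m_expr,
                                             const char* n_expr,
                                             int m, int n) {
  if (MutuallyPrime(m, n)) return testing::AssertionSuccess();
  return testing::AssertionFailure() << m_expr << " and " << n_expr << " ("
                                     << m << " and " << n
                                     << ") are not mutually prime";
}

TEST(PredicateFormatterTest, FullyCustomMessage) {
  int a = 4, b = 6;
  EXPECT_PRED_FORMAT2(AssertMutuallyPrime, a, b);
}
```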
### Floating-Point Comparison
See [Floating-Point Comparison](reference/assertions.md#floating-point) in the
Assertions Reference.
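As a brief sketch of the most common forms:

```c++
TEST(FloatingPointTest, BasicComparisons) {
  // Equal to within 4 ULPs (units in the last place):
  EXPECT_DOUBLE_EQ(0.3, 0.1 + 0.2);
  EXPECT_FLOAT_EQ(1.0f, 1.0000001f);
  // Equal to within an explicit absolute error bound:
  EXPECT_NEAR(3.14159, 3.1416, 1e-4);
}
```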
#### Floating-Point Predicate-Format Functions
Some floating-point operations are useful, but not that often used. In order to
avoid an explosion of new macros, we provide them as predicate-format functions
that can be used in the predicate assertion macro
[`EXPECT_PRED_FORMAT2`](reference/assertions.md#EXPECT_PRED_FORMAT), for
example:
```c++
EXPECT_PRED_FORMAT2(testing::FloatLE, val1, val2);
EXPECT_PRED_FORMAT2(testing::DoubleLE, val1, val2);
```
The above code verifies that `val1` is less than, or approximately equal to,
`val2`.
### Asserting Using gMock Matchers
See [`EXPECT_THAT`](reference/assertions.md#EXPECT_THAT) in the Assertions
Reference.
### More String Assertions
(Please read the [previous](#asserting-using-gmock-matchers) section first if
you haven't.)
You can use the gMock [string matchers](reference/matchers.md#string-matchers)
with [`EXPECT_THAT`](reference/assertions.md#EXPECT_THAT) to do more string
comparison tricks (substring, prefix, suffix, regular expression, etc.). For
example,
```c++
using ::testing::HasSubstr;
using ::testing::MatchesRegex;
...
ASSERT_THAT(foo_string, HasSubstr("needle"));
EXPECT_THAT(bar_string, MatchesRegex("\\w*\\d+"));
```
### Windows HRESULT assertions
See [Windows HRESULT Assertions](reference/assertions.md#HRESULT) in the
Assertions Reference.
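As a brief sketch (Windows-only; `CoCreateGuid` is just an illustrative Win32
API that returns an `HRESULT`):

```c++
#include "gtest/gtest.h"

#if GTEST_OS_WINDOWS
#include <objbase.h>

TEST(HResultTest, SucceededAndFailed) {
  GUID guid;
  EXPECT_HRESULT_SUCCEEDED(CoCreateGuid(&guid));  // passes when SUCCEEDED(hr)
  EXPECT_HRESULT_FAILED(E_FAIL);                  // passes when FAILED(hr)
}
#endif
```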
### Type Assertions
You can call the function
```c++
::testing::StaticAssertTypeEq<T1, T2>();
```
to assert that types `T1` and `T2` are the same. The function does nothing if
the assertion is satisfied. If the types are different, the function call will
fail to compile, the compiler error message will say that
`T1 and T2 are not the same type` and most likely (depending on the compiler)
show you the actual values of `T1` and `T2`. This is mainly useful inside
template code.
**Caveat**: When used inside a member function of a class template or a function
template, `StaticAssertTypeEq<T1, T2>()` is effective only if the function is
instantiated. For example, given:
```c++
template <typename T> class Foo {
 public:
  void Bar() { testing::StaticAssertTypeEq<int, T>(); }
};
```
the code:
```c++
void Test1() { Foo<bool> foo; }
```
will not generate a compiler error, as `Foo<bool>::Bar()` is never actually
instantiated. Instead, you need:
```c++
void Test2() { Foo<bool> foo; foo.Bar(); }
```
to cause a compiler error.
### Assertion Placement
You can use assertions in any C++ function. In particular, it doesn't have to be
a method of the test fixture class. The one constraint is that assertions that
generate a fatal failure (`FAIL*` and `ASSERT_*`) can only be used in
void-returning functions. This is a consequence of Google's not using
exceptions. If you place such an assertion in a non-void function, you'll get a confusing compile
error like `"error: void value not ignored as it ought to be"` or `"cannot
initialize return object of type 'bool' with an rvalue of type 'void'"` or
`"error: no viable conversion from 'void' to 'string'"`.
If you need to use fatal assertions in a function that returns non-void, one
option is to make the function return the value in an out parameter instead. For
example, you can rewrite `T2 Foo(T1 x)` to `void Foo(T1 x, T2* result)`. You
need to make sure that `*result` contains some sensible value even when the
function returns prematurely. As the function now returns `void`, you can use
any assertion inside of it.
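A minimal sketch of that rewrite, with hypothetical names:

```c++
#include <string>
#include "gtest/gtest.h"

// Hypothetical rewrite of `std::string Summarize(const std::string&)` into a
// void-returning function with an out parameter, so that ASSERT_* is allowed.
void Summarize(const std::string& input, std::string* result) {
  *result = "";  // sensible value even if an assertion returns early
  ASSERT_FALSE(input.empty()) << "input must not be empty";
  *result = "summary of " + input;
}

TEST(AssertionPlacementTest, OutParameter) {
  std::string summary;
  Summarize("data", &summary);
  EXPECT_EQ("summary of data", summary);
}
```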
If changing the function's type is not an option, you should just use assertions
that generate non-fatal failures, such as `ADD_FAILURE*` and `EXPECT_*`.
{: .callout .note}
NOTE: Constructors and destructors are not considered void-returning functions,
according to the C++ language specification, and so you may not use fatal
assertions in them; you'll get a compilation error if you try. Instead, either
call `abort` and crash the entire test executable, or put the fatal assertion in
a `SetUp`/`TearDown` function; see
[constructor/destructor vs. `SetUp`/`TearDown`](faq.md#CtorVsSetUp).
{: .callout .warning}
WARNING: A fatal assertion in a helper function (private void-returning method)
called from a constructor or destructor does not terminate the current test, as
your intuition might suggest: it merely returns from the constructor or
destructor early, possibly leaving your object in a partially-constructed or
partially-destructed state! You almost certainly want to `abort` or use
`SetUp`/`TearDown` instead.
## Skipping test execution
Related to the assertions `SUCCEED()` and `FAIL()`, you can prevent further test
execution at runtime with the `GTEST_SKIP()` macro. This is useful when you need
to check for preconditions of the system under test during runtime and skip
tests in a meaningful way.
`GTEST_SKIP()` can be used in individual test cases or in the `SetUp()` methods
of classes derived from either `::testing::Environment` or `::testing::Test`.
For example:
```c++
TEST(SkipTest, DoesSkip) {
  GTEST_SKIP() << "Skipping single test";
  EXPECT_EQ(0, 1);  // Won't fail; it won't be executed
}

class SkipFixture : public ::testing::Test {
 protected:
  void SetUp() override {
    GTEST_SKIP() << "Skipping all tests for this fixture";
  }
};

// Tests for SkipFixture won't be executed.
TEST_F(SkipFixture, SkipsOneTest) {
  EXPECT_EQ(5, 7);  // Won't fail
}
```
As with assertion macros, you can stream a custom message into `GTEST_SKIP()`.
## Teaching googletest How to Print Your Values
When a test assertion such as `EXPECT_EQ` fails, googletest prints the argument
values to help you debug. It does this using a user-extensible value printer.
This printer knows how to print built-in C++ types, native arrays, STL
containers, and any type that supports the `<<` operator. For other types, it
prints the raw bytes in the value and hopes that you, the user, can figure it out.
As mentioned earlier, the printer is *extensible*. That means you can teach it
to do a better job at printing your particular type than to dump the bytes. To
do that, define `<<` for your type:
```c++
#include <ostream>

namespace foo {

class Bar {  // We want googletest to be able to print instances of this.
  ...
  // Create a free inline friend function.
  friend std::ostream& operator<<(std::ostream& os, const Bar& bar) {
    return os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class it's important that the
// << operator is defined in the SAME namespace that defines Bar. C++'s look-up
// rules rely on that.
std::ostream& operator<<(std::ostream& os, const Bar& bar) {
  return os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```
Sometimes, this might not be an option: your team may consider it bad style to
have a `<<` operator for `Bar`, or `Bar` may already have a `<<` operator that
doesn't do what you want (and you cannot change it). If so, you can instead
define a `PrintTo()` function like this:
```c++
#include <ostream>

namespace foo {

class Bar {
  ...
  friend void PrintTo(const Bar& bar, std::ostream* os) {
    *os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class it's important that PrintTo()
// is defined in the SAME namespace that defines Bar. C++'s look-up rules rely
// on that.
void PrintTo(const Bar& bar, std::ostream* os) {
  *os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```
If you have defined both `<<` and `PrintTo()`, the latter will be used as far
as googletest is concerned. This allows you to customize how the value appears in
googletest's output without affecting code that relies on the behavior of its
`<<` operator.
If you want to print a value `x` using googletest's value printer yourself, just
call `::testing::PrintToString(x)`, which returns an `std::string`:
```c++
vector<pair<Bar, int> > bar_ints = GetBarIntVector();

EXPECT_TRUE(IsCorrectBarIntVector(bar_ints))
    << "bar_ints = " << testing::PrintToString(bar_ints);
```
## Death Tests
In many applications, there are assertions that can cause application failure if
a condition is not met. These sanity checks, which ensure that the program is in
a known good state, are there to fail at the earliest possible time after some
program state is corrupted. If the assertion checks the wrong condition, then
the program may proceed in an erroneous state, which could lead to memory
corruption, security holes, or worse. Hence it is vitally important to test that
such assertion statements work as expected.
Since these precondition checks cause the processes to die, we call such tests
_death tests_. More generally, any test that checks that a program terminates
(except by throwing an exception) in an expected fashion is also a death test.
Note that if a piece of code throws an exception, we don't consider it "death"
for the purpose of death tests, as the caller of the code could catch the
exception and avoid the crash. If you want to verify exceptions thrown by your
code, see [Exception Assertions](#ExceptionAssertions).
If you want to test `EXPECT_*()/ASSERT_*()` failures in your test code, see
["Catching" Failures](#catching-failures).
### How to Write a Death Test
GoogleTest provides assertion macros to support death tests. See
[Death Assertions](reference/assertions.md#death) in the Assertions Reference
for details.
To write a death test, simply use one of the macros inside your test function.
For example,
```c++
TEST(MyDeathTest, Foo) {
  // This death test uses a compound statement.
  ASSERT_DEATH({
    int n = 5;
    Foo(&n);
  }, "Error on line .* of Foo()");
}

TEST(MyDeathTest, NormalExit) {
  EXPECT_EXIT(NormalExit(), testing::ExitedWithCode(0), "Success");
}

TEST(MyDeathTest, KillProcess) {
  EXPECT_EXIT(KillProcess(), testing::KilledBySignal(SIGKILL),
              "Sending myself unblockable signal");
}
```
verifies that:
* calling `Foo(5)` causes the process to die with the given error message,
* calling `NormalExit()` causes the process to print `"Success"` to stderr and
exit with exit code 0, and
* calling `KillProcess()` kills the process with signal `SIGKILL`.
The test function body may contain other assertions and statements as well, if
necessary.
Note that a death test only cares about three things:
1. does `statement` abort or exit the process?
2. (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status
satisfy `predicate`? Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`)
is the exit status non-zero? And
3. does the stderr output match `matcher`?
In particular, if `statement` generates an `ASSERT_*` or `EXPECT_*` failure, it
will **not** cause the death test to fail, as googletest assertions don't abort
the process.
### Death Test Naming
{: .callout .important}
IMPORTANT: We strongly recommend that you follow the convention of naming your
**test suite** (not test) `*DeathTest` when it contains a death test, as
demonstrated in the above example. The
[Death Tests And Threads](#death-tests-and-threads) section below explains why.
If a test fixture class is shared by normal tests and death tests, you can use
`using` or `typedef` to introduce an alias for the fixture class and avoid
duplicating its code:
```c++
class FooTest : public testing::Test { ... };

using FooDeathTest = FooTest;

TEST_F(FooTest, DoesThis) {
  // normal test
}

TEST_F(FooDeathTest, DoesThat) {
  // death test
}
```
### Regular Expression Syntax
On POSIX systems (e.g. Linux, Cygwin, and Mac), googletest uses the
[POSIX extended regular expression](http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap09.html#tag_09_04)
syntax. To learn about this syntax, you may want to read this
[Wikipedia entry](http://en.wikipedia.org/wiki/Regular_expression#POSIX_Extended_Regular_Expressions).
On Windows, googletest uses its own simple regular expression implementation. It
lacks many features. For example, we don't support union (`"x|y"`), grouping
(`"(xy)"`), brackets (`"[xy]"`), and repetition count (`"x{5,7}"`), among
others. Below is what we do support (`A` denotes a literal character, period
(`.`), or a single `\\ ` escape sequence; `x` and `y` denote regular
expressions.):
Expression | Meaning
---------- | --------------------------------------------------------------
`c` | matches any literal character `c`
`\\d` | matches any decimal digit
`\\D` | matches any character that's not a decimal digit
`\\f` | matches `\f`
`\\n` | matches `\n`
`\\r` | matches `\r`
`\\s` | matches any ASCII whitespace, including `\n`
`\\S` | matches any character that's not a whitespace
`\\t` | matches `\t`
`\\v` | matches `\v`
`\\w` | matches any letter, `_`, or decimal digit
`\\W` | matches any character that `\\w` doesn't match
`\\c` | matches any literal character `c`, which must be a punctuation
`.` | matches any single character except `\n`
`A?` | matches 0 or 1 occurrences of `A`
`A*` | matches 0 or many occurrences of `A`
`A+` | matches 1 or many occurrences of `A`
`^` | matches the beginning of a string (not that of each line)
`$` | matches the end of a string (not that of each line)
`xy` | matches `x` followed by `y`
To help you determine which capability is available on your system, googletest
defines macros to govern which regular expression it is using. The macros are:
`GTEST_USES_SIMPLE_RE=1` or `GTEST_USES_POSIX_RE=1`. If you want your death
tests to work in all cases, you can either `#if` on these macros or use the more
limited syntax only.
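A minimal sketch of conditioning a death test on the available engine
(`Crash()` is a hypothetical function assumed to abort with a matching
message):

```c++
TEST(MyDeathTest, MatchesErrorMessage) {
#if GTEST_USES_POSIX_RE
  // The full POSIX extended syntax is available.
  EXPECT_DEATH(Crash(), "Error: (foo|bar) at line [0-9]+");
#else  // GTEST_USES_SIMPLE_RE
  // Restrict the pattern to the simple syntax: no '|', '()', '[]', or '{}'.
  EXPECT_DEATH(Crash(), "Error: \\w+ at line \\d+");
#endif
}
```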
### How It Works
See [Death Assertions](reference/assertions.md#death) in the Assertions
Reference.
### Death Tests And Threads
The reason for the two death test styles has to do with thread safety. Due to
well-known problems with forking in the presence of threads, death tests should
be run in a single-threaded context. Sometimes, however, it isn't feasible to
arrange that kind of environment. For example, statically-initialized modules
may start threads before main is ever reached. Once threads have been created,
it may be difficult or impossible to clean them up.
googletest has three features intended to raise awareness of threading issues.
1. A warning is emitted if multiple threads are running when a death test is
encountered.
2. Test suites with a name ending in "DeathTest" are run before all other
tests.
3. It uses `clone()` instead of `fork()` to spawn the child process on Linux
(`clone()` is not available on Cygwin and Mac), as `fork()` is more likely
to cause the child to hang when the parent process has multiple threads.
It's perfectly fine to create threads inside a death test statement; they are
executed in a separate process and cannot affect the parent.
### Death Test Styles
The "threadsafe" death test style was introduced in order to help mitigate the
risks of testing in a possibly multithreaded environment. It trades increased
test execution time (potentially dramatically so) for improved thread safety.
The automated testing framework does not set the style flag. You can choose a
particular style of death tests by setting the flag programmatically:
```c++
testing::FLAGS_gtest_death_test_style = "threadsafe";
```
You can do this in `main()` to set the style for all death tests in the binary,
or in individual tests. Recall that flags are saved before running each test and
restored afterwards, so you need not do that yourself. For example:
```c++
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  testing::FLAGS_gtest_death_test_style = "fast";
  return RUN_ALL_TESTS();
}

TEST(MyDeathTest, TestOne) {
  testing::FLAGS_gtest_death_test_style = "threadsafe";
  // This test is run in the "threadsafe" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}

TEST(MyDeathTest, TestTwo) {
  // This test is run in the "fast" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}
```
### Caveats
The `statement` argument of `ASSERT_EXIT()` can be any valid C++ statement. If
it leaves the current function via a `return` statement or by throwing an
exception, the death test is considered to have failed. Some googletest macros
may return from the current function (e.g. `ASSERT_TRUE()`), so be sure to avoid
them in `statement`.
Since `statement` runs in the child process, any in-memory side effect (e.g.
modifying a variable, releasing memory, etc) it causes will *not* be observable
in the parent process. In particular, if you release memory in a death test,
your program will fail the heap check as the parent process will never see the
memory reclaimed. To solve this problem, you can
1. try not to free memory in a death test;
2. free the memory again in the parent process; or
3. do not use the heap checker in your program.
Due to an implementation detail, you cannot place multiple death test assertions
on the same line; otherwise, compilation will fail with an unobvious error
message.
Despite the improved thread safety afforded by the "threadsafe" style of death
test, thread problems such as deadlock are still possible in the presence of
handlers registered with `pthread_atfork(3)`.
## Using Assertions in Sub-routines
{: .callout .note}
Note: If you want to put a series of test assertions in a subroutine to check
for a complex condition, consider using
[a custom GMock matcher](gmock_cook_book.md#NewMatchers)
instead. This lets you provide a more readable error message in case of failure
and avoid all of the issues described below.
### Adding Traces to Assertions
If a test sub-routine is called from several places, when an assertion inside it
fails, it can be hard to tell which invocation of the sub-routine the failure is
from. You can alleviate this problem using extra logging or custom failure
messages, but that usually clutters up your tests. A better solution is to use
the `SCOPED_TRACE` macro or the `ScopedTrace` utility:
```c++
SCOPED_TRACE(message);
```
```c++
ScopedTrace trace("file_path", line_number, message);
```
where `message` can be anything streamable to `std::ostream`. The `SCOPED_TRACE`
macro will cause the current file name, line number, and the given message to be
added to every failure message. `ScopedTrace` accepts an explicit file name and
line number as arguments, which is useful for writing test helpers. The effect
will be undone when control leaves the current lexical scope.
For example,
```c++
10: void Sub1(int n) {
11: EXPECT_EQ(Bar(n), 1);
12: EXPECT_EQ(Bar(n + 1), 2);
13: }
14:
15: TEST(FooTest, Bar) {
16: {
17: SCOPED_TRACE("A"); // This trace point will be included in
18: // every failure in this scope.
19: Sub1(1);
20: }
21: // Now it won't.
22: Sub1(9);
23: }
```
could result in messages like these:
```none
path/to/foo_test.cc:11: Failure
Value of: Bar(n)
Expected: 1
Actual: 2
Google Test trace:
path/to/foo_test.cc:17: A
path/to/foo_test.cc:12: Failure
Value of: Bar(n + 1)
Expected: 2
Actual: 3
```
Without the trace, it would've been difficult to know which invocation of
`Sub1()` the two failures come from respectively. (You could add an extra
message to each assertion in `Sub1()` to indicate the value of `n`, but that's
tedious.)
Some tips on using `SCOPED_TRACE`:
1. With a suitable message, it's often enough to use `SCOPED_TRACE` at the
beginning of a sub-routine, instead of at each call site.
2. When calling sub-routines inside a loop, make the loop iterator part of the
   message in `SCOPED_TRACE` so that you can tell which iteration a failure
   came from (see the sketch after this list).
3. Sometimes the line number of the trace point is enough for identifying the
particular invocation of a sub-routine. In this case, you don't have to
choose a unique message for `SCOPED_TRACE`. You can simply use `""`.
4. You can use `SCOPED_TRACE` in an inner scope when there is one in the outer
   scope. In this case, all active trace points will be included in the failure
   messages, in the reverse order in which they are encountered.
5. The trace dump is clickable in Emacs - hit `return` on a line number and
you'll be taken to that line in the source file!
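For instance, here is a minimal sketch of tip 2 above; `CheckValue()` is a
hypothetical helper containing its own assertions:
```c++
#include "gtest/gtest.h"
// Hypothetical helper containing its own assertions.
void CheckValue(int i) { EXPECT_GE(i * i, i); }
TEST(FooTest, ManyValues) {
  for (int i = 0; i < 10; ++i) {
    // Including the iterator in the trace message lets a failure
    // report identify the exact iteration.
    SCOPED_TRACE(testing::Message() << "i = " << i);
    CheckValue(i);
  }
}
```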
### Propagating Fatal Failures
A common pitfall when using `ASSERT_*` and `FAIL*` is not understanding that
when they fail they only abort the _current function_, not the entire test. For
example, the following test will segfault:
```c++
void Subroutine() {
// Generates a fatal failure and aborts the current function.
ASSERT_EQ(1, 2);
// The following won't be executed.
...
}
TEST(FooTest, Bar) {
Subroutine(); // The intended behavior is for the fatal failure
// in Subroutine() to abort the entire test.
// The actual behavior: the function goes on after Subroutine() returns.
int* p = nullptr;
*p = 3; // Segfault!
}
```
To alleviate this, googletest provides three different solutions. You can use
exceptions, the `(ASSERT|EXPECT)_NO_FATAL_FAILURE` assertions, or the
`HasFatalFailure()` function. They are described in the following three
subsections.
#### Asserting on Subroutines with an exception
The following code can turn ASSERT-failure into an exception:
```c++
class ThrowListener : public testing::EmptyTestEventListener {
void OnTestPartResult(const testing::TestPartResult& result) override {
if (result.type() == testing::TestPartResult::kFatalFailure) {
throw testing::AssertionException(result);
}
}
};
int main(int argc, char** argv) {
...
testing::UnitTest::GetInstance()->listeners().Append(new ThrowListener);
return RUN_ALL_TESTS();
}
```
This listener should be added after other listeners if you have any; otherwise
they won't see failed `OnTestPartResult` events.
#### Asserting on Subroutines
As shown above, if your test calls a subroutine that has an `ASSERT_*` failure
in it, the test will continue after the subroutine returns. This may not be what
you want.
Often people want fatal failures to propagate like exceptions. For that
googletest offers the following macros:
Fatal assertion | Nonfatal assertion | Verifies
------------------------------------- | ------------------------------------- | --------
`ASSERT_NO_FATAL_FAILURE(statement);` | `EXPECT_NO_FATAL_FAILURE(statement);` | `statement` doesn't generate any new fatal failures in the current thread.
Only failures in the thread that executes the assertion are checked to determine
the result of this type of assertions. If `statement` creates new threads,
failures in these threads are ignored.
Examples:
```c++
ASSERT_NO_FATAL_FAILURE(Foo());
int i;
EXPECT_NO_FATAL_FAILURE({
i = Bar();
});
```
Assertions from multiple threads are currently not supported on Windows.
#### Checking for Failures in the Current Test
`HasFatalFailure()` in the `::testing::Test` class returns `true` if an
assertion in the current test has suffered a fatal failure. This allows
functions to catch fatal failures in a sub-routine and return early.
```c++
class Test {
public:
...
static bool HasFatalFailure();
};
```
The typical usage, which basically simulates the behavior of a thrown exception,
is:
```c++
TEST(FooTest, Bar) {
Subroutine();
// Aborts if Subroutine() had a fatal failure.
if (HasFatalFailure()) return;
// The following won't be executed.
...
}
```
If `HasFatalFailure()` is used outside of `TEST()` , `TEST_F()` , or a test
fixture, you must add the `::testing::Test::` prefix, as in:
```c++
if (testing::Test::HasFatalFailure()) return;
```
Similarly, `HasNonfatalFailure()` returns `true` if the current test has at
least one non-fatal failure, and `HasFailure()` returns `true` if the current
test has at least one failure of either kind.
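As an illustrative sketch, assuming a `Subroutine()` that may record failures
of either kind:
```c++
// Illustrative sketch: abort the test early if Subroutine() recorded
// any failure, fatal or non-fatal.
TEST(FooTest, Baz) {
  Subroutine();
  if (HasFailure()) return;
  // Reached only if no failure has been recorded so far.
}
```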
## Logging Additional Information
In your test code, you can call `RecordProperty("key", value)` to log additional
information, where `value` can be either a string or an `int`. The *last* value
recorded for a key will be emitted to the
[XML output](#generating-an-xml-report) if you specify one. For example, the
test
```c++
TEST_F(WidgetUsageTest, MinAndMaxWidgets) {
RecordProperty("MaximumWidgets", ComputeMaxUsage());
RecordProperty("MinimumWidgets", ComputeMinUsage());
}
```
will output XML like this:
```xml
...
<testcase name="MinAndMaxWidgets" status="run" time="0.006" classname="WidgetUsageTest" MaximumWidgets="12" MinimumWidgets="9" />
...
```
{: .callout .note}
> NOTE:
>
> * `RecordProperty()` is a static member of the `Test` class. Therefore it
> needs to be prefixed with `::testing::Test::` if used outside of the
> `TEST` body and the test fixture class.
> * *`key`* must be a valid XML attribute name, and cannot conflict with the
> ones already used by googletest (`name`, `status`, `time`, `classname`,
> `type_param`, and `value_param`).
> * Calling `RecordProperty()` outside of the lifespan of a test is allowed.
> If it's called outside of a test but between a test suite's
> `SetUpTestSuite()` and `TearDownTestSuite()` methods, it will be
> attributed to the XML element for the test suite. If it's called outside
> of all test suites (e.g. in a test environment), it will be attributed to
> the top-level XML element.
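As a sketch of the last point above, a property recorded from
`SetUpTestSuite()` is attributed to the test suite's XML element (the fixture
and key names here are illustrative):
```c++
class WidgetUsageTest : public testing::Test {
 protected:
  static void SetUpTestSuite() {
    // Recorded between SetUpTestSuite() and TearDownTestSuite(), so it
    // lands on the <testsuite> element, not on any <testcase>.
    RecordProperty("WidgetPoolSize", 42);
  }
};
```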
## Sharing Resources Between Tests in the Same Test Suite
googletest creates a new test fixture object for each test in order to make
tests independent and easier to debug. However, sometimes tests use resources
that are expensive to set up, making the one-copy-per-test model prohibitively
expensive.
If the tests don't change the resource, there's no harm in their sharing a
single resource copy. So, in addition to per-test set-up/tear-down, googletest
also supports per-test-suite set-up/tear-down. To use it:
1. In your test fixture class (say `FooTest` ), declare as `static` some member
variables to hold the shared resources.
2. Outside your test fixture class (typically just below it), define those
member variables, optionally giving them initial values.
3. In the same test fixture class, define a `static void SetUpTestSuite()`
function (remember not to spell it as **`SetupTestSuite`** with a small
`u`!) to set up the shared resources and a `static void TearDownTestSuite()`
function to tear them down.
That's it! googletest automatically calls `SetUpTestSuite()` before running the
*first test* in the `FooTest` test suite (i.e. before creating the first
`FooTest` object), and calls `TearDownTestSuite()` after running the *last test*
in it (i.e. after deleting the last `FooTest` object). In between, the tests can
use the shared resources.
Remember that the test order is undefined, so your code can't depend on a test
preceding or following another. Also, the tests must either not modify the state
of any shared resource, or, if they do modify the state, they must restore the
state to its original value before passing control to the next test.
Here's an example of per-test-suite set-up and tear-down:
```c++
class FooTest : public testing::Test {
protected:
// Per-test-suite set-up.
// Called before the first test in this test suite.
// Can be omitted if not needed.
static void SetUpTestSuite() {
shared_resource_ = new ...;
}
// Per-test-suite tear-down.
// Called after the last test in this test suite.
// Can be omitted if not needed.
static void TearDownTestSuite() {
delete shared_resource_;
shared_resource_ = nullptr;
}
// You can define per-test set-up logic as usual.
void SetUp() override { ... }
// You can define per-test tear-down logic as usual.
void TearDown() override { ... }
// Some expensive resource shared by all tests.
static T* shared_resource_;
};
T* FooTest::shared_resource_ = nullptr;
TEST_F(FooTest, Test1) {
... you can refer to shared_resource_ here ...
}
TEST_F(FooTest, Test2) {
... you can refer to shared_resource_ here ...
}
```
{: .callout .note}
NOTE: Though the above code declares `SetUpTestSuite()` protected, it may
sometimes be necessary to declare it public, such as when using it with
`TEST_P`.
## Global Set-Up and Tear-Down
Just as you can do set-up and tear-down at the test level and the test suite
level, you can also do it at the test program level. Here's how.
First, you subclass the `::testing::Environment` class to define a test
environment, which knows how to set-up and tear-down:
```c++
class Environment : public ::testing::Environment {
public:
~Environment() override {}
// Override this to define how to set up the environment.
void SetUp() override {}
// Override this to define how to tear down the environment.
void TearDown() override {}
};
```
Then, you register an instance of your environment class with googletest by
calling the `::testing::AddGlobalTestEnvironment()` function:
```c++
Environment* AddGlobalTestEnvironment(Environment* env);
```
Now, when `RUN_ALL_TESTS()` is called, it first calls the `SetUp()` method of
each environment object, then runs the tests if none of the environments
reported fatal failures and `GTEST_SKIP()` was not called. `RUN_ALL_TESTS()`
always calls `TearDown()` with each environment object, regardless of whether or
not the tests were run.
It's OK to register multiple environment objects. In this suite, their `SetUp()`
will be called in the order they are registered, and their `TearDown()` will be
called in the reverse order.
Note that googletest takes ownership of the registered environment objects.
Therefore **do not delete them** by yourself.
You should call `AddGlobalTestEnvironment()` before `RUN_ALL_TESTS()` is called,
probably in `main()`. If you use `gtest_main`, you need to call this before
`main()` starts for it to take effect. One way to do this is to define a global
variable like this:
```c++
testing::Environment* const foo_env =
testing::AddGlobalTestEnvironment(new FooEnvironment);
```
However, we strongly recommend you to write your own `main()` and call
`AddGlobalTestEnvironment()` there, as relying on initialization of global
variables makes the code harder to read and may cause problems when you register
multiple environments from different translation units and the environments have
dependencies among them (remember that the compiler doesn't guarantee the order
in which global variables from different translation units are initialized).
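A minimal sketch of the recommended approach, assuming the `Environment`
subclass from above is named `FooEnvironment`:
```c++
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  // googletest takes ownership of the registered environment object.
  testing::AddGlobalTestEnvironment(new FooEnvironment);
  return RUN_ALL_TESTS();
}
```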
## Value-Parameterized Tests
*Value-parameterized tests* allow you to test your code with different
parameters without writing multiple copies of the same test. This is useful in a
number of situations, for example:
* You have a piece of code whose behavior is affected by one or more
command-line flags. You want to make sure your code performs correctly for
various values of those flags.
* You want to test different implementations of an OO interface.
* You want to test your code over various inputs (a.k.a. data-driven testing).
This feature is easy to abuse, so please exercise your good sense when doing
it!
### How to Write Value-Parameterized Tests
To write value-parameterized tests, first you should define a fixture class. It
must be derived from both `testing::Test` and `testing::WithParamInterface<T>`
(the latter is a pure interface), where `T` is the type of your parameter
values. For convenience, you can just derive the fixture class from
`testing::TestWithParam<T>`, which itself is derived from both `testing::Test`
and `testing::WithParamInterface<T>`. `T` can be any copyable type. If it's a
raw pointer, you are responsible for managing the lifespan of the pointed
values.
{: .callout .note}
NOTE: If your test fixture defines `SetUpTestSuite()` or `TearDownTestSuite()`
they must be declared **public** rather than **protected** in order to use
`TEST_P`.
```c++
class FooTest :
public testing::TestWithParam<const char*> {
// You can implement all the usual fixture class members here.
// To access the test parameter, call GetParam() from class
// TestWithParam<T>.
};
// Or, when you want to add parameters to a pre-existing fixture class:
class BaseTest : public testing::Test {
...
};
class BarTest : public BaseTest,
public testing::WithParamInterface<const char*> {
...
};
```
Then, use the `TEST_P` macro to define as many test patterns using this fixture
as you want. The `_P` suffix is for "parameterized" or "pattern", whichever you
prefer.
```c++
TEST_P(FooTest, DoesBlah) {
// Inside a test, access the test parameter with the GetParam() method
// of the TestWithParam<T> class:
EXPECT_TRUE(foo.Blah(GetParam()));
...
}
TEST_P(FooTest, HasBlahBlah) {
...
}
```
Finally, you can use the `INSTANTIATE_TEST_SUITE_P` macro to instantiate the
test suite with any set of parameters you want. GoogleTest defines a number of
functions for generating test parameters—see details at
[`INSTANTIATE_TEST_SUITE_P`](reference/testing.md#INSTANTIATE_TEST_SUITE_P) in
the Testing Reference.
For example, the following statement will instantiate tests from the `FooTest`
test suite each with parameter values `"meeny"`, `"miny"`, and `"moe"` using the
[`Values`](reference/testing.md#param-generators) parameter generator:
```c++
INSTANTIATE_TEST_SUITE_P(MeenyMinyMoe,
FooTest,
testing::Values("meeny", "miny", "moe"));
```
{: .callout .note}
NOTE: The code above must be placed at global or namespace scope, not at
function scope.
The first argument to `INSTANTIATE_TEST_SUITE_P` is a unique name for the
instantiation of the test suite. The next argument is the name of the test
pattern, and the last is the
[parameter generator](reference/testing.md#param-generators).
You can instantiate a test pattern more than once, so to distinguish different
instances of the pattern, the instantiation name is added as a prefix to the
actual test suite name. Remember to pick unique prefixes for different
instantiations. The tests from the instantiation above will have these names:
* `MeenyMinyMoe/FooTest.DoesBlah/0` for `"meeny"`
* `MeenyMinyMoe/FooTest.DoesBlah/1` for `"miny"`
* `MeenyMinyMoe/FooTest.DoesBlah/2` for `"moe"`
* `MeenyMinyMoe/FooTest.HasBlahBlah/0` for `"meeny"`
* `MeenyMinyMoe/FooTest.HasBlahBlah/1` for `"miny"`
* `MeenyMinyMoe/FooTest.HasBlahBlah/2` for `"moe"`
You can use these names in [`--gtest_filter`](#running-a-subset-of-the-tests).
The following statement will instantiate all tests from `FooTest` again, each
with parameter values `"cat"` and `"dog"` using the
[`ValuesIn`](reference/testing.md#param-generators) parameter generator:
```c++
const char* pets[] = {"cat", "dog"};
INSTANTIATE_TEST_SUITE_P(Pets, FooTest, testing::ValuesIn(pets));
```
The tests from the instantiation above will have these names:
* `Pets/FooTest.DoesBlah/0` for `"cat"`
* `Pets/FooTest.DoesBlah/1` for `"dog"`
* `Pets/FooTest.HasBlahBlah/0` for `"cat"`
* `Pets/FooTest.HasBlahBlah/1` for `"dog"`
Please note that `INSTANTIATE_TEST_SUITE_P` will instantiate *all* tests in the
given test suite, whether their definitions come before or *after* the
`INSTANTIATE_TEST_SUITE_P` statement.
Additionally, by default, every `TEST_P` without a corresponding
`INSTANTIATE_TEST_SUITE_P` causes a failing test in test suite
`GoogleTestVerification`. If you have a test suite where that omission is not an
error, for example it is in a library that may be linked in for other reasons or
where the list of test cases is dynamic and may be empty, then this check can be
suppressed by tagging the test suite:
```c++
GTEST_ALLOW_UNINSTANTIATED_PARAMETERIZED_TEST(FooTest);
```
You can see [sample7_unittest.cc] and [sample8_unittest.cc] for more examples.
[sample7_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample7_unittest.cc "Parameterized Test example"
[sample8_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample8_unittest.cc "Parameterized Test example with multiple parameters"
### Creating Value-Parameterized Abstract Tests
In the above, we define and instantiate `FooTest` in the *same* source file.
Sometimes you may want to define value-parameterized tests in a library and let
other people instantiate them later. This pattern is known as *abstract tests*.
As an example of its application, when you are designing an interface you can
write a standard suite of abstract tests (perhaps using a factory function as
the test parameter) that all implementations of the interface are expected to
pass. When someone implements the interface, they can instantiate your suite to
get all the interface-conformance tests for free.
To define abstract tests, you should organize your code like this:
1. Put the definition of the parameterized test fixture class (e.g. `FooTest`)
in a header file, say `foo_param_test.h`. Think of this as *declaring* your
abstract tests.
2. Put the `TEST_P` definitions in `foo_param_test.cc`, which includes
`foo_param_test.h`. Think of this as *implementing* your abstract tests.
Once they are defined, you can instantiate them by including `foo_param_test.h`,
invoking `INSTANTIATE_TEST_SUITE_P()`, and depending on the library target that
contains `foo_param_test.cc`. You can instantiate the same abstract test suite
multiple times, possibly in different source files.
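Sketched out, the files might look like this (all type, file, and function
names are illustrative):
```c++
// foo_param_test.h -- *declares* the abstract tests.
#include "gtest/gtest.h"
#include "foo.h"   // hypothetical header defining the interface Foo
using FooFactory = Foo* (*)();  // the test parameter is a factory function
class FooTest : public testing::TestWithParam<FooFactory> {};

// foo_param_test.cc -- *implements* the abstract tests.
#include "foo_param_test.h"
TEST_P(FooTest, CreatesSuccessfully) {
  Foo* foo = GetParam()();  // create an instance via the factory parameter
  EXPECT_NE(foo, nullptr);
  delete foo;
}

// my_foo_test.cc -- *instantiates* them for one concrete implementation.
#include "foo_param_test.h"
Foo* MakeMyFoo();  // hypothetical factory for a concrete MyFoo
INSTANTIATE_TEST_SUITE_P(MyFoo, FooTest, testing::Values(&MakeMyFoo));
```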
### Specifying Names for Value-Parameterized Test Parameters
The optional last argument to `INSTANTIATE_TEST_SUITE_P()` allows the user to
specify a function or functor that generates custom test name suffixes based on
the test parameters. The function should accept one argument of type
`testing::TestParamInfo<class ParamType>`, and return `std::string`.
`testing::PrintToStringParamName` is a builtin test suffix generator that
returns the value of `testing::PrintToString(GetParam())`. It does not work for
`std::string` or C strings.
{: .callout .note}
NOTE: test names must be non-empty, unique, and may only contain ASCII
alphanumeric characters. In particular, they
[should not contain underscores](faq.md#why-should-test-suite-names-and-test-names-not-contain-underscore).
```c++
class MyTestSuite : public testing::TestWithParam<int> {};
TEST_P(MyTestSuite, MyTest)
{
std::cout << "Example Test Param: " << GetParam() << std::endl;
}
INSTANTIATE_TEST_SUITE_P(MyGroup, MyTestSuite, testing::Range(0, 10),
testing::PrintToStringParamName());
```
Providing a custom functor allows for more control over test parameter name
generation, especially for types where the automatic conversion does not
generate helpful parameter names (e.g. strings as demonstrated above). The
following example illustrates this for multiple parameters, an enumeration type
and a string, and also demonstrates how to combine generators. It uses a lambda
for conciseness:
```c++
enum class MyType { MY_FOO = 0, MY_BAR = 1 };
class MyTestSuite : public testing::TestWithParam<std::tuple<MyType, std::string>> {
};
INSTANTIATE_TEST_SUITE_P(
MyGroup, MyTestSuite,
testing::Combine(
testing::Values(MyType::MY_FOO, MyType::MY_BAR),
testing::Values("A", "B")),
[](const testing::TestParamInfo<MyTestSuite::ParamType>& info) {
std::string name = absl::StrCat(
std::get<0>(info.param) == MyType::MY_FOO ? "Foo" : "Bar",
std::get<1>(info.param));
absl::c_replace_if(name, [](char c) { return !std::isalnum(c); }, '_');
return name;
});
```
## Typed Tests
Suppose you have multiple implementations of the same interface and want to make
sure that all of them satisfy some common requirements. Or, you may have defined
several types that are supposed to conform to the same "concept" and you want to
verify it. In both cases, you want the same test logic repeated for different
types.
While you can write one `TEST` or `TEST_F` for each type you want to test (and
you may even factor the test logic into a function template that you invoke from
the `TEST`), it's tedious and doesn't scale: if you want `m` tests over `n`
types, you'll end up writing `m*n` `TEST`s.
*Typed tests* allow you to repeat the same test logic over a list of types. You
only need to write the test logic once, although you must know the type list
when writing typed tests. Here's how you do it:
First, define a fixture class template. It should be parameterized by a type.
Remember to derive it from `::testing::Test`:
```c++
template <typename T>
class FooTest : public testing::Test {
public:
...
using List = std::list<T>;
static T shared_;
T value_;
};
```
Next, associate a list of types with the test suite, which will be repeated for
each type in the list:
```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
TYPED_TEST_SUITE(FooTest, MyTypes);
```
The type alias (`using` or `typedef`) is necessary for the `TYPED_TEST_SUITE`
macro to parse correctly. Otherwise the compiler will think that each comma in
the type list introduces a new macro argument.
Then, use `TYPED_TEST()` instead of `TEST_F()` to define a typed test for this
test suite. You can repeat this as many times as you want:
```c++
TYPED_TEST(FooTest, DoesBlah) {
// Inside a test, refer to the special name TypeParam to get the type
// parameter. Since we are inside a derived class template, C++ requires
// us to visit the members of FooTest via 'this'.
TypeParam n = this->value_;
// To visit static members of the fixture, add the 'TestFixture::'
// prefix.
n += TestFixture::shared_;
// To refer to typedefs in the fixture, add the 'typename TestFixture::'
// prefix. The 'typename' is required to satisfy the compiler.
typename TestFixture::List values;
values.push_back(n);
...
}
TYPED_TEST(FooTest, HasPropertyA) { ... }
```
You can see [sample6_unittest.cc] for a complete example.
[sample6_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample6_unittest.cc "Typed Test example"
## Type-Parameterized Tests
*Type-parameterized tests* are like typed tests, except that they don't require
you to know the list of types ahead of time. Instead, you can define the test
logic first and instantiate it with different type lists later. You can even
instantiate it more than once in the same program.
If you are designing an interface or concept, you can define a suite of
type-parameterized tests to verify properties that any valid implementation of
the interface/concept should have. Then, the author of each implementation can
just instantiate the test suite with their type to verify that it conforms to
the requirements, without having to write similar tests repeatedly. Here's an
example:
First, define a fixture class template, as we did with typed tests:
```c++
template <typename T>
class FooTest : public testing::Test {
...
};
```
Next, declare that you will define a type-parameterized test suite:
```c++
TYPED_TEST_SUITE_P(FooTest);
```
Then, use `TYPED_TEST_P()` to define a type-parameterized test. You can repeat
this as many times as you want:
```c++
TYPED_TEST_P(FooTest, DoesBlah) {
// Inside a test, refer to TypeParam to get the type parameter.
TypeParam n = 0;
...
}
TYPED_TEST_P(FooTest, HasPropertyA) { ... }
```
Now the tricky part: you need to register all test patterns using the
`REGISTER_TYPED_TEST_SUITE_P` macro before you can instantiate them. The first
argument of the macro is the test suite name; the rest are the names of the
tests in this test suite:
```c++
REGISTER_TYPED_TEST_SUITE_P(FooTest,
DoesBlah, HasPropertyA);
```
Finally, you are free to instantiate the pattern with the types you want. If you
put the above code in a header file, you can `#include` it in multiple C++
source files and instantiate it multiple times.
```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, MyTypes);
```
To distinguish different instances of the pattern, the first argument to the
`INSTANTIATE_TYPED_TEST_SUITE_P` macro is a prefix that will be added to the
actual test suite name. Remember to pick unique prefixes for different
instances.
In the special case where the type list contains only one type, you can write
that type directly without `::testing::Types<...>`, like this:
```c++
INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, int);
```
You can see [sample6_unittest.cc] for a complete example.
## Testing Private Code
If you change your software's internal implementation, your tests should not
break as long as the change is not observable by users. Therefore, **per the
black-box testing principle, most of the time you should test your code through
its public interfaces.**
**If you still find yourself needing to test internal implementation code,
consider if there's a better design.** The desire to test internal
implementation is often a sign that the class is doing too much. Consider
extracting an implementation class, and testing it. Then use that implementation
class in the original class.
If you absolutely have to test non-public interface code though, you can. There
are two cases to consider:
* Static functions (*not* the same as static member functions!) or unnamed
  namespaces, and
* Private or protected class members
To test them, we use the following special techniques:
* Both static functions and definitions/declarations in an unnamed namespace
are only visible within the same translation unit. To test them, you can
`#include` the entire `.cc` file being tested in your `*_test.cc` file.
(#including `.cc` files is not a good way to reuse code - you should not do
this in production code!)
However, a better approach is to move the private code into the
`foo::internal` namespace, where `foo` is the namespace your project
normally uses, and put the private declarations in a `*-internal.h` file.
Your production `.cc` files and your tests are allowed to include this
internal header, but your clients are not. This way, you can fully test your
internal implementation without leaking it to your clients.
* Private class members are only accessible from within the class or by
  friends. To access a class' private members, you can declare your test
  fixture as a friend to the class and define accessors in your fixture (see
  the fixture-as-friend sketch after the `FRIEND_TEST` example below). Tests
  using the fixture can then access the private members of your production
  class via the accessors in the fixture. Note that even though your fixture
  is a friend to your production class, your tests are not automatically
  friends to it, as they are technically defined in sub-classes of the
  fixture.
Another way to test private members is to refactor them into an
implementation class, which is then declared in a `*-internal.h` file. Your
  clients aren't allowed to include this header, but your tests can. This is
  called the
  [Pimpl](https://www.gamedev.net/articles/programming/general-and-gameplay-programming/the-c-pimpl-r1794/)
  (Private Implementation) idiom.
Or, you can declare an individual test as a friend of your class by adding
this line in the class body:
```c++
FRIEND_TEST(TestSuiteName, TestName);
```
For example,
```c++
// foo.h
class Foo {
...
private:
FRIEND_TEST(FooTest, BarReturnsZeroOnNull);
int Bar(void* x);
};
// foo_test.cc
...
TEST(FooTest, BarReturnsZeroOnNull) {
Foo foo;
EXPECT_EQ(foo.Bar(NULL), 0); // Uses Foo's private member Bar().
}
```
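And here is a minimal sketch of the fixture-as-friend technique described
above (all names are again illustrative):
```c++
// foo.h
class Foo {
 private:
  friend class FooTest;  // grants the *fixture* access to private members
  int Bar(void* x);
};

// foo_test.cc
class FooTest : public testing::Test {
 protected:
  // Accessor that forwards to the private member. Tests are defined in
  // sub-classes of the fixture and are not friends of Foo themselves,
  // so they must go through the fixture.
  static int CallBar(Foo& foo, void* x) { return foo.Bar(x); }
};

TEST_F(FooTest, BarReturnsZeroOnNull) {
  Foo foo;
  EXPECT_EQ(CallBar(foo, nullptr), 0);
}
```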
Pay special attention when your class is defined in a namespace. If you want
your test fixtures and tests to be friends of your class, then they must be
defined in the exact same namespace (no anonymous or inline namespaces).
For example, if the code to be tested looks like:
```c++
namespace my_namespace {
class Foo {
friend class FooTest;
FRIEND_TEST(FooTest, Bar);
FRIEND_TEST(FooTest, Baz);
... definition of the class Foo ...
};
} // namespace my_namespace
```
Your test code should be something like:
```c++
namespace my_namespace {
class FooTest : public testing::Test {
protected:
...
};
TEST_F(FooTest, Bar) { ... }
TEST_F(FooTest, Baz) { ... }
} // namespace my_namespace
```
## "Catching" Failures
If you are building a testing utility on top of googletest, you'll want to test
your utility. What framework would you use to test it? googletest, of course.
The challenge is to verify that your testing utility reports failures correctly.
In frameworks that report a failure by throwing an exception, you could catch
the exception and assert on it. But googletest doesn't use exceptions, so how do
we test that a piece of code generates an expected failure?
`"gtest/gtest-spi.h"` contains some constructs to do this. After #including this header,
you can use
```c++
EXPECT_FATAL_FAILURE(statement, substring);
```
to assert that `statement` generates a fatal (e.g. `ASSERT_*`) failure in the
current thread whose message contains the given `substring`, or use
```c++
EXPECT_NONFATAL_FAILURE(statement, substring);
```
if you are expecting a non-fatal (e.g. `EXPECT_*`) failure.
Only failures in the current thread are checked to determine the result of this
type of expectation. If `statement` creates new threads, failures in these
threads are ignored. If you want to catch failures in other threads as well,
use one of the following macros instead:
```c++
EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substring);
EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substring);
```
{: .callout .note}
NOTE: Assertions from multiple threads are currently not supported on Windows.
For technical reasons, there are some caveats:
1. You cannot stream a failure message to either macro.
2. `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot reference
local non-static variables or non-static members of `this` object.
3. `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot return a
value.
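For example, a minimal self-test sketch using these constructs:
```c++
#include "gtest/gtest-spi.h"
// Verifies that EXPECT_EQ records a non-fatal failure whose message
// contains the substring "Expected" when the values differ.
TEST(UtilitySelfTest, ReportsMismatch) {
  EXPECT_NONFATAL_FAILURE(EXPECT_EQ(1, 2), "Expected");
}
```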
## Registering tests programmatically
The `TEST` macros handle the vast majority of all use cases, but there are a
few where runtime registration logic is required. For those cases, the
framework provides `::testing::RegisterTest`, which allows callers to register
arbitrary tests dynamically.
This is an advanced API only to be used when the `TEST` macros are insufficient.
The macros should be preferred when possible, as they avoid most of the
complexity of calling this function.
It provides the following signature:
```c++
template <typename Factory>
TestInfo* RegisterTest(const char* test_suite_name, const char* test_name,
const char* type_param, const char* value_param,
const char* file, int line, Factory factory);
```
The `factory` argument is a factory callable (move-constructible) object or
function pointer that creates a new instance of the Test object. Ownership of
the created object passes to the caller. The signature of the callable is `Fixture*()`, where
`Fixture` is the test fixture class for the test. All tests registered with the
same `test_suite_name` must return the same fixture type. This is checked at
runtime.
The framework will infer the fixture class from the factory and will call the
`SetUpTestSuite` and `TearDownTestSuite` for it.
`RegisterTest` must be called before `RUN_ALL_TESTS()` is invoked; otherwise
the behavior is undefined.
Use case example:
```c++
class MyFixture : public testing::Test {
public:
// All of these optional, just like in regular macro usage.
static void SetUpTestSuite() { ... }
static void TearDownTestSuite() { ... }
void SetUp() override { ... }
void TearDown() override { ... }
};
class MyTest : public MyFixture {
public:
explicit MyTest(int data) : data_(data) {}
void TestBody() override { ... }
private:
int data_;
};
void RegisterMyTests(const std::vector<int>& values) {
for (int v : values) {
testing::RegisterTest(
"MyFixture", ("Test" + std::to_string(v)).c_str(), nullptr,
std::to_string(v).c_str(),
__FILE__, __LINE__,
// Important to use the fixture type as the return type here.
[=]() -> MyFixture* { return new MyTest(v); });
}
}
...
int main(int argc, char** argv) {
std::vector<int> values_to_test = LoadValuesFromConfig();
RegisterMyTests(values_to_test);
...
return RUN_ALL_TESTS();
}
```
## Getting the Current Test's Name
Sometimes a function may need to know the name of the currently running test.
For example, you may be using the `SetUp()` method of your test fixture to set
the golden file name based on which test is running. The
[`TestInfo`](reference/testing.md#TestInfo) class has this information.
To obtain a `TestInfo` object for the currently running test, call
`current_test_info()` on the [`UnitTest`](reference/testing.md#UnitTest)
singleton object:
```c++
// Gets information about the currently running test.
// Do NOT delete the returned object - it's managed by the UnitTest class.
const testing::TestInfo* const test_info =
testing::UnitTest::GetInstance()->current_test_info();
printf("We are in test %s of test suite %s.\n",
test_info->name(),
test_info->test_suite_name());
```
`current_test_info()` returns a null pointer if no test is running. In
particular, you cannot find the test suite name in `SetUpTestSuite()`,
`TearDownTestSuite()` (where you know the test suite name implicitly), or
functions called from them.
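A sketch of the golden-file use case mentioned above (the naming scheme is
hypothetical):
```c++
#include <string>
class GoldenTest : public testing::Test {
 protected:
  void SetUp() override {
    const testing::TestInfo* const info =
        testing::UnitTest::GetInstance()->current_test_info();
    // Inside SetUp() a test is running, so info is non-null here.
    // e.g. "MathTest.Addition" -> "MathTest_Addition.golden"
    golden_path_ = std::string(info->test_suite_name()) + "_" +
                   info->name() + ".golden";
  }
  std::string golden_path_;
};
```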
## Extending googletest by Handling Test Events
googletest provides an **event listener API** to let you receive notifications
about the progress of a test program and test failures. The events you can
listen to include the start and end of the test program, a test suite, or a test
method, among others. You may use this API to augment or replace the standard
console output, replace the XML output, or provide a completely different form
of output, such as a GUI or a database. You can also use test events as
checkpoints to implement a resource leak checker, for example.
### Defining Event Listeners
To define an event listener, you subclass either
[`testing::TestEventListener`](reference/testing.md#TestEventListener) or
[`testing::EmptyTestEventListener`](reference/testing.md#EmptyTestEventListener).
The former is an (abstract) interface, where *each pure virtual method can be
overridden to handle a test event* (for example, when a test starts, the
`OnTestStart()` method will be called). The latter provides an empty
implementation of all methods in the interface, such that a subclass only needs
to override the methods it cares about.
When an event is fired, its context is passed to the handler function as an
argument. The following argument types are used:
* `UnitTest` reflects the state of the entire test program,
* `TestSuite` has information about a test suite, which can contain one or more
  tests,
* `TestInfo` contains the state of a test, and
* `TestPartResult` represents the result of a test assertion.
An event handler function can examine the argument it receives to find out
interesting information about the event and the test program's state.
Here's an example:
```c++
class MinimalistPrinter : public testing::EmptyTestEventListener {
// Called before a test starts.
void OnTestStart(const testing::TestInfo& test_info) override {
printf("*** Test %s.%s starting.\n",
test_info.test_suite_name(), test_info.name());
}
// Called after a failed assertion or a SUCCESS().
void OnTestPartResult(const testing::TestPartResult& test_part_result) override {
printf("%s in %s:%d\n%s\n",
test_part_result.failed() ? "*** Failure" : "Success",
test_part_result.file_name(),
test_part_result.line_number(),
test_part_result.summary());
}
// Called after a test ends.
void OnTestEnd(const testing::TestInfo& test_info) override {
printf("*** Test %s.%s ending.\n",
test_info.test_suite_name(), test_info.name());
}
};
```
### Using Event Listeners
To use the event listener you have defined, add an instance of it to the
googletest event listener list (represented by class
[`TestEventListeners`](reference/testing.md#TestEventListeners) - note the "s"
at the end of the name) in your `main()` function, before calling
`RUN_ALL_TESTS()`:
```c++
int main(int argc, char** argv) {
testing::InitGoogleTest(&argc, argv);
// Gets hold of the event listener list.
testing::TestEventListeners& listeners =
testing::UnitTest::GetInstance()->listeners();
// Adds a listener to the end. googletest takes the ownership.
listeners.Append(new MinimalistPrinter);
return RUN_ALL_TESTS();
}
```
There's only one problem: the default test result printer is still in effect, so
its output will mingle with the output from your minimalist printer. To suppress
the default printer, just release it from the event listener list and delete it.
You can do so by adding one line:
```c++
...
delete listeners.Release(listeners.default_result_printer());
listeners.Append(new MinimalistPrinter);
return RUN_ALL_TESTS();
```
Now, sit back and enjoy a completely different output from your tests. For more
details, see [sample9_unittest.cc].
[sample9_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample9_unittest.cc "Event listener example"
You may append more than one listener to the list. When an `On*Start()` or
`OnTestPartResult()` event is fired, the listeners will receive it in the order
they appear in the list (since new listeners are added to the end of the list,
the default text printer and the default XML generator will receive the event
first). An `On*End()` event will be received by the listeners in the *reverse*
order. This allows output by listeners added later to be framed by output from
listeners added earlier.
### Generating Failures in Listeners
You may use failure-raising macros (`EXPECT_*()`, `ASSERT_*()`, `FAIL()`, etc)
when processing an event. There are some restrictions:
1. You cannot generate any failure in `OnTestPartResult()` (otherwise it will
cause `OnTestPartResult()` to be called recursively).
2. A listener that handles `OnTestPartResult()` is not allowed to generate any
failure.
When you add listeners to the listener list, you should put listeners that
handle `OnTestPartResult()` *before* listeners that can generate failures. This
ensures that failures generated by the latter are attributed to the right test
by the former.
See [sample10_unittest.cc] for an example of a failure-raising listener.
[sample10_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample10_unittest.cc "Failure-raising listener example"
## Running Test Programs: Advanced Options
googletest test programs are ordinary executables. Once built, you can run them
directly and affect their behavior via the following environment variables
and/or command line flags. For the flags to work, your programs must call
`::testing::InitGoogleTest()` before calling `RUN_ALL_TESTS()`.
To see a list of supported flags and their usage, please run your test program
with the `--help` flag. You can also use `-h`, `-?`, or `/?` for short.
If an option is specified both by an environment variable and by a flag, the
latter takes precedence.
### Selecting Tests
#### Listing Test Names
Sometimes it is necessary to list the available tests in a program before
running them so that a filter may be applied if needed. Including the flag
`--gtest_list_tests` overrides all other flags and lists tests in the following
format:
```none
TestSuite1.
TestName1
TestName2
TestSuite2.
TestName
```
None of the tests listed are actually run if the flag is provided. There is no
corresponding environment variable for this flag.
#### Running a Subset of the Tests
By default, a googletest program runs all tests the user has defined. Sometimes,
you want to run only a subset of the tests (e.g. for debugging or quickly
verifying a change). If you set the `GTEST_FILTER` environment variable or the
`--gtest_filter` flag to a filter string, googletest will only run the tests
whose full names (in the form of `TestSuiteName.TestName`) match the filter.
The format of a filter is a '`:`'-separated list of wildcard patterns (called
the *positive patterns*) optionally followed by a '`-`' and another
'`:`'-separated pattern list (called the *negative patterns*). A test matches
the filter if and only if it matches any of the positive patterns but does not
match any of the negative patterns.
A pattern may contain `'*'` (matches any string) or `'?'` (matches any single
character). For convenience, the filter `'*-NegativePatterns'` can also be
written as `'-NegativePatterns'`.
For example:
* `./foo_test` Has no flag, and thus runs all its tests.
* `./foo_test --gtest_filter=*` Also runs everything, due to the single
match-everything `*` value.
* `./foo_test --gtest_filter=FooTest.*` Runs everything in test suite
`FooTest` .
* `./foo_test --gtest_filter=*Null*:*Constructor*` Runs any test whose full
name contains either `"Null"` or `"Constructor"` .
* `./foo_test --gtest_filter=-*DeathTest.*` Runs all non-death tests.
* `./foo_test --gtest_filter=FooTest.*-FooTest.Bar` Runs everything in test
suite `FooTest` except `FooTest.Bar`.
* `./foo_test --gtest_filter=FooTest.*:BarTest.*-FooTest.Bar:BarTest.Foo` Runs
everything in test suite `FooTest` except `FooTest.Bar` and everything in
test suite `BarTest` except `BarTest.Foo`.
#### Stop test execution upon first failure
By default, a googletest program runs all tests the user has defined. In some
cases (e.g. iterative test development & execution) it may be desirable to stop
test execution upon the first failure (trading completeness for improved
latency). If the `GTEST_FAIL_FAST` environment variable or the
`--gtest_fail_fast` flag is set, the test runner will stop execution as soon as
the first test failure is found.
#### Temporarily Disabling Tests
If you have a broken test that you cannot fix right away, you can add the
`DISABLED_` prefix to its name. This will exclude it from execution. This is
better than commenting out the code or using `#if 0`, as disabled tests are
still compiled (and thus won't rot).
If you need to disable all tests in a test suite, you can either add `DISABLED_`
to the front of the name of each test, or alternatively add it to the front of
the test suite name.
For example, the following tests won't be run by googletest, even though they
will still be compiled:
```c++
// Tests that Foo does Abc.
TEST(FooTest, DISABLED_DoesAbc) { ... }
class DISABLED_BarTest : public testing::Test { ... };
// Tests that Bar does Xyz.
TEST_F(DISABLED_BarTest, DoesXyz) { ... }
```
{: .callout .note}
NOTE: This feature should only be used for temporary pain-relief. You still have
to fix the disabled tests at a later date. As a reminder, googletest will print
a banner warning you if a test program contains any disabled tests.
{: .callout .tip}
TIP: You can easily count the number of disabled tests you have using
`grep`. This number can be used as a metric for
improving your test quality.
#### Temporarily Enabling Disabled Tests
To include disabled tests in test execution, just invoke the test program with
the `--gtest_also_run_disabled_tests` flag or set the
`GTEST_ALSO_RUN_DISABLED_TESTS` environment variable to a value other than `0`.
You can combine this with the `--gtest_filter` flag to further select which
disabled tests to run.
### Repeating the Tests
Once in a while you'll run into a test whose result is hit-or-miss. Perhaps it
will fail only 1% of the time, making it rather hard to reproduce the bug under
a debugger. This can be a major source of frustration.
The `--gtest_repeat` flag allows you to repeat all (or selected) test methods in
a program many times. Hopefully, a flaky test will eventually fail and give you
a chance to debug. Here's how to use it:
```none
$ foo_test --gtest_repeat=1000
Repeat foo_test 1000 times and don't stop at failures.
$ foo_test --gtest_repeat=-1
A negative count means repeating forever.
$ foo_test --gtest_repeat=1000 --gtest_break_on_failure
Repeat foo_test 1000 times, stopping at the first failure. This
is especially useful when running under a debugger: when the test
fails, it will drop into the debugger and you can then inspect
variables and stacks.
$ foo_test --gtest_repeat=1000 --gtest_filter=FooBar.*
Repeat the tests whose name matches the filter 1000 times.
```
If your test program contains
[global set-up/tear-down](#global-set-up-and-tear-down) code, it will be
repeated in each iteration as well, as the flakiness may be in it. You can also
specify the repeat count by setting the `GTEST_REPEAT` environment variable.
### Shuffling the Tests
You can specify the `--gtest_shuffle` flag (or set the `GTEST_SHUFFLE`
environment variable to `1`) to run the tests in a program in a random order.
This helps to reveal bad dependencies between tests.
By default, googletest uses a random seed calculated from the current time.
Therefore you'll get a different order every time. The console output includes
the random seed value, such that you can reproduce an order-related test failure
later. To specify the random seed explicitly, use the `--gtest_random_seed=SEED`
flag (or set the `GTEST_RANDOM_SEED` environment variable), where `SEED` is an
integer in the range [0, 99999]. The seed value 0 is special: it tells
googletest to do the default behavior of calculating the seed from the current
time.
If you combine this with `--gtest_repeat=N`, googletest will pick a different
random seed and re-shuffle the tests in each iteration.
### Controlling Test Output
#### Colored Terminal Output
googletest can use colors in its terminal output to make it easier to spot the
important information:
<pre>...
<font color="green">[----------]</font> 1 test from FooTest
<font color="green">[ RUN ]</font> FooTest.DoesAbc
<font color="green">[ OK ]</font> FooTest.DoesAbc
<font color="green">[----------]</font> 2 tests from BarTest
<font color="green">[ RUN ]</font> BarTest.HasXyzProperty
<font color="green">[ OK ]</font> BarTest.HasXyzProperty
<font color="green">[ RUN ]</font> BarTest.ReturnsTrueOnSuccess
... some error messages ...
<font color="red">[ FAILED ]</font> BarTest.ReturnsTrueOnSuccess
...
<font color="green">[==========]</font> 30 tests from 14 test suites ran.
<font color="green">[ PASSED ]</font> 28 tests.
<font color="red">[ FAILED ]</font> 2 tests, listed below:
<font color="red">[ FAILED ]</font> BarTest.ReturnsTrueOnSuccess
<font color="red">[ FAILED ]</font> AnotherTest.DoesXyz
2 FAILED TESTS
</pre>
You can set the `GTEST_COLOR` environment variable or the `--gtest_color`
command line flag to `yes`, `no`, or `auto` (the default) to enable colors,
disable colors, or let googletest decide. When the value is `auto`, googletest
will use colors if and only if the output goes to a terminal and (on non-Windows
platforms) the `TERM` environment variable is set to `xterm` or `xterm-color`.
#### Suppressing test passes
By default, googletest prints 1 line of output for each test, indicating if it
passed or failed. To show only test failures, run the test program with
`--gtest_brief=1`, or set the `GTEST_BRIEF` environment variable to `1`.
#### Suppressing the Elapsed Time
By default, googletest prints the time it takes to run each test. To disable
that, run the test program with the `--gtest_print_time=0` command line flag,
or set the `GTEST_PRINT_TIME` environment variable to `0`.
#### Suppressing UTF-8 Text Output
In case of assertion failures, googletest prints expected and actual values of
type `string` both as hex-encoded strings as well as in readable UTF-8 text if
they contain valid non-ASCII UTF-8 characters. If you want to suppress the UTF-8
text because, for example, you don't have a UTF-8-compatible output medium, run
the test program with `--gtest_print_utf8=0` or set the `GTEST_PRINT_UTF8`
environment variable to `0`.
#### Generating an XML Report
googletest can emit a detailed XML report to a file in addition to its normal
textual output. The report contains the duration of each test, and thus can help
you identify slow tests.
To generate the XML report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"xml:path_to_output_file"`, which will
create the file at the given location. You can also just use the string `"xml"`,
in which case the output can be found in the `test_detail.xml` file in the
current directory.
If you specify a directory (for example, `"xml:output/directory/"` on Linux or
`"xml:output\directory\"` on Windows), googletest will create the XML file in
that directory, named after the test executable (e.g. `foo_test.xml` for test
program `foo_test` or `foo_test.exe`). If the file already exists (perhaps left
over from a previous run), googletest will pick a different name (e.g.
`foo_test_1.xml`) to avoid overwriting it.
The report is based on the `junitreport` Ant task. Since that format was
originally intended for Java, a little interpretation is required to make it
apply to googletest tests, as shown here:
```xml
<testsuites name="AllTests" ...>
<testsuite name="test_case_name" ...>
<testcase name="test_name" ...>
<failure message="..."/>
<failure message="..."/>
<failure message="..."/>
</testcase>
</testsuite>
</testsuites>
```
* The root `<testsuites>` element corresponds to the entire test program.
* `<testsuite>` elements correspond to googletest test suites.
* `<testcase>` elements correspond to googletest test functions.
For instance, the following program
```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```
could generate this report:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="3" failures="1" errors="0" time="0.035" timestamp="2011-10-31T18:52:42" name="AllTests">
<testsuite name="MathTest" tests="2" failures="1" errors="0" time="0.015">
<testcase name="Addition" status="run" time="0.007" classname="">
<failure message="Value of: add(1, 1)&#x0A; Actual: 3&#x0A;Expected: 2" type="">...</failure>
<failure message="Value of: add(1, -1)&#x0A; Actual: 1&#x0A;Expected: 0" type="">...</failure>
</testcase>
<testcase name="Subtraction" status="run" time="0.005" classname="">
</testcase>
</testsuite>
<testsuite name="LogicTest" tests="1" failures="0" errors="0" time="0.005">
<testcase name="NonContradiction" status="run" time="0.005" classname="">
</testcase>
</testsuite>
</testsuites>
```
Things to note:
* The `tests` attribute of a `<testsuites>` or `<testsuite>` element tells how
many test functions the googletest program or test suite contains, while the
`failures` attribute tells how many of them failed.
* The `time` attribute expresses the duration of the test, test suite, or
entire test program in seconds.
* The `timestamp` attribute records the local date and time of the test
execution.
* Each `<failure>` element corresponds to a single failed googletest
assertion.
#### Generating a JSON Report
googletest can also emit a JSON report as an alternative format to XML. To
generate the JSON report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"json:path_to_output_file"`, which will
create the file at the given location. You can also just use the string
`"json"`, in which case the output can be found in the `test_detail.json` file
in the current directory.
The report format conforms to the following JSON Schema:
```json
{
"$schema": "http://json-schema.org/schema#",
"type": "object",
"definitions": {
"TestCase": {
"type": "object",
"properties": {
"name": { "type": "string" },
"tests": { "type": "integer" },
"failures": { "type": "integer" },
"disabled": { "type": "integer" },
"time": { "type": "string" },
"testsuite": {
"type": "array",
"items": {
"$ref": "#/definitions/TestInfo"
}
}
}
},
"TestInfo": {
"type": "object",
"properties": {
"name": { "type": "string" },
"status": {
"type": "string",
"enum": ["RUN", "NOTRUN"]
},
"time": { "type": "string" },
"classname": { "type": "string" },
"failures": {
"type": "array",
"items": {
"$ref": "#/definitions/Failure"
}
}
}
},
"Failure": {
"type": "object",
"properties": {
"failures": { "type": "string" },
"type": { "type": "string" }
}
}
},
"properties": {
"tests": { "type": "integer" },
"failures": { "type": "integer" },
"disabled": { "type": "integer" },
"errors": { "type": "integer" },
"timestamp": {
"type": "string",
"format": "date-time"
},
"time": { "type": "string" },
"name": { "type": "string" },
"testsuites": {
"type": "array",
"items": {
"$ref": "#/definitions/TestCase"
}
}
}
}
```
The report format conforms to the following Proto3 specification, using its
[JSON encoding](https://developers.google.com/protocol-buffers/docs/proto3#json):
```proto
syntax = "proto3";
package googletest;
import "google/protobuf/timestamp.proto";
import "google/protobuf/duration.proto";
message UnitTest {
int32 tests = 1;
int32 failures = 2;
int32 disabled = 3;
int32 errors = 4;
google.protobuf.Timestamp timestamp = 5;
google.protobuf.Duration time = 6;
string name = 7;
repeated TestCase testsuites = 8;
}
message TestCase {
string name = 1;
int32 tests = 2;
int32 failures = 3;
int32 disabled = 4;
int32 errors = 5;
google.protobuf.Duration time = 6;
repeated TestInfo testsuite = 7;
}
message TestInfo {
string name = 1;
enum Status {
RUN = 0;
NOTRUN = 1;
}
Status status = 2;
google.protobuf.Duration time = 3;
string classname = 4;
message Failure {
string failures = 1;
string type = 2;
}
repeated Failure failures = 5;
}
```
For instance, the following program
```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```
could generate this report:
```json
{
"tests": 3,
"failures": 1,
"errors": 0,
"time": "0.035s",
"timestamp": "2011-10-31T18:52:42Z",
"name": "AllTests",
"testsuites": [
{
"name": "MathTest",
"tests": 2,
"failures": 1,
"errors": 0,
"time": "0.015s",
"testsuite": [
{
"name": "Addition",
"status": "RUN",
"time": "0.007s",
"classname": "",
"failures": [
{
"message": "Value of: add(1, 1)\n Actual: 3\nExpected: 2",
"type": ""
},
{
"message": "Value of: add(1, -1)\n Actual: 1\nExpected: 0",
"type": ""
}
]
},
{
"name": "Subtraction",
"status": "RUN",
"time": "0.005s",
"classname": ""
}
]
},
{
"name": "LogicTest",
"tests": 1,
"failures": 0,
"errors": 0,
"time": "0.005s",
"testsuite": [
{
"name": "NonContradiction",
"status": "RUN",
"time": "0.005s",
"classname": ""
}
]
}
]
}
```
{: .callout .important}
IMPORTANT: The exact format of the JSON document is subject to change.
### Controlling How Failures Are Reported
#### Detecting Test Premature Exit
Google Test implements the _premature-exit-file_ protocol for test runners to
catch any kind of unexpected exit of a test program. Upon start, Google Test
creates the file named by the `TEST_PREMATURE_EXIT_FILE` environment variable
and automatically deletes it after all work has finished. The test runner can
then check whether this file still exists: if it does, the inspected test
program exited prematurely.
This feature is enabled only if the `TEST_PREMATURE_EXIT_FILE` environment
variable has been set.
#### Turning Assertion Failures into Break-Points
When running test programs under a debugger, it's very convenient if the
debugger can catch an assertion failure and automatically drop into interactive
mode. googletest's *break-on-failure* mode supports this behavior.
To enable it, set the `GTEST_BREAK_ON_FAILURE` environment variable to a value
other than `0`. Alternatively, you can use the `--gtest_break_on_failure`
command line flag.
#### Disabling Catching Test-Thrown Exceptions
googletest can be used either with or without exceptions enabled. If a test
throws a C++ exception or (on Windows) a structured exception (SEH), by default
googletest catches it, reports it as a test failure, and continues with the next
test method. This maximizes the coverage of a test run. Also, on Windows an
uncaught exception will cause a pop-up window, so catching the exceptions allows
you to run the tests automatically.
When debugging the test failures, however, you may instead want the exceptions
to be handled by the debugger, such that you can examine the call stack when an
exception is thrown. To achieve that, set the `GTEST_CATCH_EXCEPTIONS`
environment variable to `0`, or use the `--gtest_catch_exceptions=0` flag when
running the tests.
### Sanitizer Integration
The
[Undefined Behavior Sanitizer](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html),
[Address Sanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizer),
and
[Thread Sanitizer](https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual)
all provide weak functions that you can override to trigger explicit failures
when they detect sanitizer errors, such as creating a reference from `nullptr`.
To override these functions, place definitions for them in a source file that
you compile as part of your main binary:
```c++
extern "C" {
void __ubsan_on_report() {
FAIL() << "Encountered an undefined behavior sanitizer error";
}
void __asan_on_error() {
FAIL() << "Encountered an address sanitizer error";
}
void __tsan_on_report() {
FAIL() << "Encountered a thread sanitizer error";
}
} // extern "C"
```
After compiling your project with one of the sanitizers enabled, if a particular
test triggers a sanitizer error, googletest will report that it failed.
# Community-Created Documentation
The following is a list, in no particular order, of links to documentation
created by the Googletest community.
* [Googlemock Insights](https://github.com/ElectricRCAircraftGuy/eRCaGuy_dotfiles/blob/master/googletest/insights.md),
by [ElectricRCAircraftGuy](https://github.com/ElectricRCAircraftGuy)
# Googletest FAQ
## Why should test suite names and test names not contain underscore?
{: .callout .note}
Note: Googletest reserves underscore (`_`) for special purpose keywords, such as
[the `DISABLED_` prefix](advanced.md#temporarily-disabling-tests), in addition
to the following rationale.
Underscore (`_`) is special, as C++ reserves the following to be used by the
compiler and the standard library:
1. any identifier that starts with an `_` followed by an upper-case letter, and
2. any identifier that contains two consecutive underscores (i.e. `__`)
*anywhere* in its name.
User code is *prohibited* from using such identifiers.
Now let's look at what this means for `TEST` and `TEST_F`.
Currently `TEST(TestSuiteName, TestName)` generates a class named
`TestSuiteName_TestName_Test`. What happens if `TestSuiteName` or `TestName`
contains `_`?
1. If `TestSuiteName` starts with an `_` followed by an upper-case letter (say,
`_Foo`), we end up with `_Foo_TestName_Test`, which is reserved and thus
invalid.
2. If `TestSuiteName` ends with an `_` (say, `Foo_`), we get
`Foo__TestName_Test`, which is invalid.
3. If `TestName` starts with an `_` (say, `_Bar`), we get
`TestSuiteName__Bar_Test`, which is invalid.
4. If `TestName` ends with an `_` (say, `Bar_`), we get
`TestSuiteName_Bar__Test`, which is invalid.
So clearly `TestSuiteName` and `TestName` cannot start or end with `_`.
(Actually, `TestSuiteName` can start with `_`, as long as the `_` isn't
followed by an upper-case letter. But that's getting complicated. So for
simplicity we just say that it cannot start with `_`.)
It may seem fine for `TestSuiteName` and `TestName` to contain `_` in the
middle. However, consider this:
```c++
TEST(Time, Flies_Like_An_Arrow) { ... }
TEST(Time_Flies, Like_An_Arrow) { ... }
```
Now, the two `TEST`s will both generate the same class
(`Time_Flies_Like_An_Arrow_Test`). That's not good.
So for simplicity, we just ask the users to avoid `_` in `TestSuiteName` and
`TestName`. The rule is more constraining than necessary, but it's simple and
easy to remember. It also gives googletest some wiggle room in case its
implementation needs to change in the future.
If you violate the rule, there may not be immediate consequences, but your test
may (just may) break with a new compiler (or a new version of the compiler you
are using) or with a new version of googletest. Therefore it's best to follow
the rule.
## Why does googletest support `EXPECT_EQ(NULL, ptr)` and `ASSERT_EQ(NULL, ptr)` but not `EXPECT_NE(NULL, ptr)` and `ASSERT_NE(NULL, ptr)`?
First of all, you can use `nullptr` with each of these macros, e.g.
`EXPECT_EQ(ptr, nullptr)`, `EXPECT_NE(ptr, nullptr)`, `ASSERT_EQ(ptr, nullptr)`,
`ASSERT_NE(ptr, nullptr)`. This is the preferred syntax in the style guide
because `nullptr` does not have the type problems that `NULL` does.
Due to some peculiarity of C++, it requires some non-trivial template
metaprogramming tricks to support using `NULL` as an argument of the `EXPECT_XX()`
and `ASSERT_XX()` macros. Therefore we only do it where it's most needed
(otherwise we make the implementation of googletest harder to maintain and more
error-prone than necessary).
Historically, the `EXPECT_EQ()` macro took the *expected* value as its first
argument and the *actual* value as the second, though this argument order is
now discouraged. It was reasonable for someone to want to write
`EXPECT_EQ(NULL, some_expression)`, and this was indeed requested several
times, so we implemented it.
The need for `EXPECT_NE(NULL, ptr)` wasn't nearly as strong. When the assertion
fails, you already know that `ptr` must be `NULL`, so it doesn't add any
information to print `ptr` in this case. That means `EXPECT_TRUE(ptr != NULL)`
works just as well.
If we were to support `EXPECT_NE(NULL, ptr)`, for consistency we'd have to
support `EXPECT_NE(ptr, NULL)` as well. This means using the template
metaprogramming tricks twice in the implementation, making it even harder to
understand and maintain. We believe the benefit doesn't justify the cost.
Finally, with the growth of the gMock matcher library, we are encouraging people
to use the unified `EXPECT_THAT(value, matcher)` syntax more often in tests. One
significant advantage of the matcher approach is that matchers can be easily
combined to form new matchers, while the `EXPECT_NE`, etc, macros cannot be
easily combined. Therefore we want to invest more in the matchers than in the
`EXPECT_XX()` macros.
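For example, here is a quick sketch of the matcher style (`value` and `ptr` are
placeholders for your own variables):
```c++
using ::testing::AllOf;
using ::testing::Ge;
using ::testing::Lt;
using ::testing::NotNull;

EXPECT_THAT(ptr, NotNull());                // instead of EXPECT_NE(ptr, nullptr)
EXPECT_THAT(value, AllOf(Ge(0), Lt(100)));  // matchers compose; EXPECT_NE cannot
```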
## I need to test that different implementations of an interface satisfy some common requirements. Should I use typed tests or value-parameterized tests?
For testing various implementations of the same interface, either typed tests
or value-parameterized tests can get it done. It's really up to you, the user,
to decide which is more convenient, depending on your particular case. Some
rough guidelines:
* Typed tests can be easier to write if instances of the different
implementations can be created the same way, modulo the type. For example,
if all these implementations have a public default constructor (such that
you can write `new TypeParam`), or if their factory functions have the same
form (e.g. `CreateInstance<TypeParam>()`).
* Value-parameterized tests can be easier to write if you need different code
patterns to create different implementations' instances, e.g. `new Foo` vs
`new Bar(5)`. To accommodate the differences, you can write factory
function wrappers and pass these function pointers to the tests as their
parameters.
* When a typed test fails, the default output includes the name of the type,
which can help you quickly identify which implementation is wrong.
Value-parameterized tests only show the number of the failed iteration by
default. You will need to define a function that returns the iteration name
and pass it as the third parameter to `INSTANTIATE_TEST_SUITE_P` to have
more useful output.
* When using typed tests, you need to make sure you are testing against the
interface type, not the concrete types (in other words, you want to make
sure `implicit_cast<MyInterface*>(my_concrete_impl)` works, not just that
`my_concrete_impl` works). It's less likely to make mistakes in this area
when using value-parameterized tests.
I hope I didn't confuse you more. :-) If you don't mind, I'd suggest you give
both approaches a try. Practice is a much better way to grasp the subtle
differences between the two tools. Once you have some concrete experience, you
can much more easily decide which one to use the next time.
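To make the contrast concrete, here is a minimal sketch of both styles for a
hypothetical `Queue` interface with implementations `FastQueue` and
`SmallQueue` (assuming the usual googletest headers and `<memory>` are
included):
```c++
// Typed test: instances are created the same way, modulo the type.
template <typename T>
class QueueTypedTest : public ::testing::Test {};
using Implementations = ::testing::Types<FastQueue, SmallQueue>;
TYPED_TEST_SUITE(QueueTypedTest, Implementations);

TYPED_TEST(QueueTypedTest, IsEmptyInitially) {
  TypeParam queue;
  EXPECT_EQ(queue.size(), 0u);
}

// Value-parameterized test: different creation code hidden behind factories.
using QueueFactory = Queue* (*)();
class QueueParamTest : public ::testing::TestWithParam<QueueFactory> {};

TEST_P(QueueParamTest, IsEmptyInitially) {
  std::unique_ptr<Queue> queue(GetParam()());
  EXPECT_EQ(queue->size(), 0u);
}

INSTANTIATE_TEST_SUITE_P(
    AllImplementations, QueueParamTest,
    ::testing::Values(+[]() -> Queue* { return new FastQueue; },
                      +[]() -> Queue* { return new SmallQueue(5); }));  // e.g. needs a capacity
```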
## I got some run-time errors about invalid proto descriptors when using `ProtocolMessageEquals`. Help!
{: .callout .note}
**Note:** `ProtocolMessageEquals` and `ProtocolMessageEquiv` are *deprecated*
now. Please use `EqualsProto`, etc instead.
`ProtocolMessageEquals` and `ProtocolMessageEquiv` were redefined recently and
are now less tolerant of invalid protocol buffer definitions. In particular, if
you have a `foo.proto` that doesn't fully qualify the type of a protocol message
it references (e.g. `message<Bar>` where it should be `message<blah.Bar>`), you
will now get run-time errors like:
```
... descriptor.cc:...] Invalid proto descriptor for file "path/to/foo.proto":
... descriptor.cc:...] blah.MyMessage.my_field: ".Bar" is not defined.
```
If you see this, your `.proto` file is broken and needs to be fixed by making
the types fully qualified. The new definitions of `ProtocolMessageEquals` and
`ProtocolMessageEquiv` just happen to reveal your bug.
## My death test modifies some state, but the change seems lost after the death test finishes. Why?
Death tests (`EXPECT_DEATH`, etc.) are executed in a sub-process so that the
expected crash won't kill the test program (i.e. the parent process). As a
result, any in-memory side effects they incur are observable in their
respective sub-processes, but not in the parent process. You can think of them
as running in a parallel universe, more or less.
In particular, if you use mocking and the death test statement invokes some mock
methods, the parent process will think the calls have never occurred. Therefore,
you may want to move your `EXPECT_CALL` statements inside the `EXPECT_DEATH`
macro.
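For instance, a sketch of moving the expectation into the death statement
(`MockLog` and `CrashIfMisconfigured()` are hypothetical):
```c++
TEST(MyDeathTest, LogsBeforeCrashing) {
  MockLog log;  // hypothetical mock class
  EXPECT_DEATH({
    // Set the expectation inside the statement so it takes effect (and is
    // verified) in the child process that actually dies.
    EXPECT_CALL(log, Write(::testing::_)).Times(::testing::AtLeast(1));
    CrashIfMisconfigured(&log);  // hypothetical function expected to die
  }, "misconfigured");
}
```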
## EXPECT_EQ(htonl(blah), blah_blah) generates weird compiler errors in opt mode. Is this a googletest bug?
Actually, the bug is in `htonl()`.
According to `man htonl`, `htonl()` is a *function*, which means it's valid to
use `htonl` as a function pointer. However, in opt mode `htonl()` is defined as
a *macro*, which breaks this usage.
Worse, the macro definition of `htonl()` uses a `gcc` extension and is *not*
standard C++. That hacky implementation has some ad hoc limitations. In
particular, it prevents you from writing `Foo<sizeof(htonl(x))>()`, where `Foo`
is a template that has an integral argument.
The implementation of `EXPECT_EQ(a, b)` uses `sizeof(... a ...)` inside a
template argument, and thus doesn't compile in opt mode when `a` contains a call
to `htonl()`. It is difficult to make `EXPECT_EQ` bypass the `htonl()` bug, as
the solution must work with different compilers on various platforms.
## The compiler complains about "undefined references" to some static const member variables, but I did define them in the class body. What's wrong?
If your class has a static data member:
```c++
// foo.h
class Foo {
...
static const int kBar = 100;
};
```
You also need to define it *outside* of the class body in `foo.cc`:
```c++
const int Foo::kBar; // No initializer here.
```
Otherwise your code is **invalid C++**, and may break in unexpected ways. In
particular, using it in googletest comparison assertions (`EXPECT_EQ`, etc) will
generate an "undefined reference" linker error. The fact that "it used to work"
doesn't mean it's valid. It just means that you were lucky. :-)
If the declaration of the static data member is `constexpr` then it is
implicitly an `inline` definition, and a separate definition in `foo.cc` is not
needed:
```c++
// foo.h
class Foo {
...
static constexpr int kBar = 100; // Defines kBar, no need to do it in foo.cc.
};
```
## Can I derive a test fixture from another?
Yes.
Each test fixture has a corresponding test suite of the same name. This means
only one test suite can use a particular fixture. Sometimes, however, multiple
test suites may want to use the same or slightly different fixtures. For
example, you may want to make sure that all of a GUI library's test suites
don't leak important system resources like fonts and brushes.
In googletest, you share a fixture among test suites by putting the shared logic
in a base test fixture, then deriving from that base a separate fixture for each
test suite that wants to use this common logic. You then use `TEST_F()` to write
tests using each derived fixture.
Typically, your code looks like this:
```c++
// Defines a base test fixture.
class BaseTest : public ::testing::Test {
protected:
...
};
// Derives a fixture FooTest from BaseTest.
class FooTest : public BaseTest {
protected:
void SetUp() override {
BaseTest::SetUp(); // Sets up the base fixture first.
... additional set-up work ...
}
void TearDown() override {
... clean-up work for FooTest ...
BaseTest::TearDown(); // Remember to tear down the base fixture
// after cleaning up FooTest!
}
... functions and variables for FooTest ...
};
// Tests that use the fixture FooTest.
TEST_F(FooTest, Bar) { ... }
TEST_F(FooTest, Baz) { ... }
... additional fixtures derived from BaseTest ...
```
If necessary, you can continue to derive test fixtures from a derived fixture.
googletest has no limit on how deep the hierarchy can be.
For a complete example using derived test fixtures, see
[sample5_unittest.cc](https://github.com/google/googletest/blob/master/googletest/samples/sample5_unittest.cc).
## My compiler complains "void value not ignored as it ought to be." What does this mean?
You're probably using an `ASSERT_*()` in a function that doesn't return `void`.
`ASSERT_*()` can only be used in `void` functions, due to exceptions being
disabled by our build system. Please see more details
[here](advanced.md#assertion-placement).
## My death test hangs (or seg-faults). How do I fix it?
In googletest, death tests are run in a child process and the way they work is
delicate. To write death tests you really need to understand how they work—see
the details at [Death Assertions](reference/assertions.md#death) in the
Assertions Reference.
In particular, death tests don't like having multiple threads in the parent
process. So the first thing you can try is to eliminate creating threads outside
of `EXPECT_DEATH()`. For example, you may want to use mocks or fake objects
instead of real ones in your tests.
Sometimes this is impossible as some library you must use may be creating
threads before `main()` is even reached. In this case, you can try to minimize
the chance of conflicts by either moving as many activities as possible inside
`EXPECT_DEATH()` (in the extreme case, you want to move everything inside), or
leaving as few things as possible in it. Also, you can try to set the death test
style to `"threadsafe"`, which is safer but slower, and see if it helps.
If you go with thread-safe death tests, remember that they rerun the test
program from the beginning in the child process. Therefore make sure your
program can run side-by-side with itself and is deterministic.
In the end, this boils down to good concurrent programming. You have to make
sure that there are no race conditions or deadlocks in your program. No silver
bullet - sorry!
## Should I use the constructor/destructor of the test fixture or SetUp()/TearDown()? {#CtorVsSetUp}
The first thing to remember is that googletest does **not** reuse the same test
fixture object across multiple tests. For each `TEST_F`, googletest will create
a **fresh** test fixture object, immediately call `SetUp()`, run the test body,
call `TearDown()`, and then delete the test fixture object.
When you need to write per-test set-up and tear-down logic, you have the choice
between using the test fixture constructor/destructor or `SetUp()/TearDown()`.
The former is usually preferred, as it has the following benefits:
* By initializing a member variable in the constructor, we have the option to
make it `const`, which helps prevent accidental changes to its value and
makes the tests more obviously correct.
* In case we need to subclass the test fixture class, the subclass'
constructor is guaranteed to call the base class' constructor *first*, and
the subclass' destructor is guaranteed to call the base class' destructor
*afterward*. With `SetUp()/TearDown()`, a subclass may make the mistake of
forgetting to call the base class' `SetUp()/TearDown()` or of calling them
at the wrong time.
You may still want to use `SetUp()/TearDown()` in the following cases:
* C++ does not allow virtual function calls in constructors and destructors.
You can call a method declared as `virtual`, but it will not use dynamic
dispatch; it will use the definition from the class whose constructor is
currently executing. This is because calling a virtual method before the
derived class constructor has had a chance to run is very dangerous - the
virtual method might operate on uninitialized data. Therefore, if you need
to call a method that will be overridden in a derived class, you have to use
`SetUp()/TearDown()`.
* In the body of a constructor (or destructor), it's not possible to use the
`ASSERT_xx` macros. Therefore, if the set-up operation could cause a fatal
test failure that should prevent the test from running, it's necessary to
call `abort()` and terminate the whole test executable, or to use `SetUp()`
instead of a constructor.
* If the tear-down operation could throw an exception, you must use
`TearDown()` as opposed to the destructor, as throwing in a destructor leads
to undefined behavior and usually will kill your program right away. Note
that many standard libraries (like STL) may throw when exceptions are
enabled in the compiler. Therefore you should prefer `TearDown()` if you
want to write portable tests that work with or without exceptions.
* The googletest team is considering making the assertion macros throw on
platforms where exceptions are enabled (e.g. Windows, Mac OS, and Linux
client-side), which will eliminate the need for the user to propagate
failures from a subroutine to its caller. Therefore, you shouldn't use
googletest assertions in a destructor if your code could run on such a
platform.
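As a brief illustration of the two styles, here is a minimal sketch (the
`State` type and `MakeState()` factory are hypothetical):
```c++
// Constructor style: the member can be const.
class CtorStyleTest : public ::testing::Test {
 protected:
  CtorStyleTest() : state_(MakeState()) {}
  const State state_;  // cannot be accidentally modified by tests
};

// SetUp() style: needed when set-up may fatally fail or calls virtuals.
class SetUpStyleTest : public ::testing::Test {
 protected:
  void SetUp() override {
    state_ = MakeState();
    ASSERT_TRUE(state_.ok());  // ASSERT_* is allowed here, unlike in a ctor
  }
  State state_;
};
```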
## The compiler complains "no matching function to call" when I use ASSERT_PRED*. How do I fix it?
See details for [`EXPECT_PRED*`](reference/assertions.md#EXPECT_PRED) in the
Assertions Reference.
## My compiler complains about "ignoring return value" when I call RUN_ALL_TESTS(). Why?
Some people had been ignoring the return value of `RUN_ALL_TESTS()`. That is,
instead of
```c++
return RUN_ALL_TESTS();
```
they write
```c++
RUN_ALL_TESTS();
```
This is **wrong and dangerous**. The testing service needs to see the return
value of `RUN_ALL_TESTS()` in order to determine if a test has passed. If your
`main()` function ignores it, your test will be considered successful even if it
has a googletest assertion failure. Very bad.
We have decided to fix this (thanks to Michael Chastain for the idea). Now, your
code will no longer be able to ignore `RUN_ALL_TESTS()` when compiled with
`gcc`. If you do so, you'll get a compiler error.
If you see the compiler complaining about you ignoring the return value of
`RUN_ALL_TESTS()`, the fix is simple: just make sure its value is used as the
return value of `main()`.
But how could we introduce a change that breaks existing tests? Well, in this
case, the code was already broken in the first place, so we didn't break it. :-)
## My compiler complains that a constructor (or destructor) cannot return a value. What's going on?
Due to a peculiarity of C++, in order to support the syntax for streaming
messages to an `ASSERT_*`, e.g.
```c++
ASSERT_EQ(1, Foo()) << "blah blah" << foo;
```
we had to give up using `ASSERT*` and `FAIL*` (but not `EXPECT*` and
`ADD_FAILURE*`) in constructors and destructors. The workaround is to move the
content of your constructor/destructor to a private void member function, or
switch to `EXPECT_*()` if that works. This
[section](advanced.md#assertion-placement) in the user's guide explains it.
## My SetUp() function is not called. Why?
C++ is case-sensitive. Did you spell it as `Setup()`?
Similarly, sometimes people spell `SetUpTestSuite()` as `SetupTestSuite()` and
wonder why it's never called.
## I have several test suites which share the same test fixture logic, do I have to define a new test fixture class for each of them? This seems pretty tedious.
You don't have to. Instead of
```c++
class FooTest : public BaseTest {};
TEST_F(FooTest, Abc) { ... }
TEST_F(FooTest, Def) { ... }
class BarTest : public BaseTest {};
TEST_F(BarTest, Abc) { ... }
TEST_F(BarTest, Def) { ... }
```
you can simply `typedef` the test fixtures:
```c++
typedef BaseTest FooTest;
TEST_F(FooTest, Abc) { ... }
TEST_F(FooTest, Def) { ... }
typedef BaseTest BarTest;
TEST_F(BarTest, Abc) { ... }
TEST_F(BarTest, Def) { ... }
```
## googletest output is buried in a whole bunch of LOG messages. What do I do?
The googletest output is meant to be a concise and human-friendly report. If
your test generates textual output itself, it will mix with the googletest
output, making it hard to read. However, there is an easy solution to this
problem.
Since `LOG` messages go to stderr, we decided to let googletest output go to
stdout. This way, you can easily separate the two using redirection. For
example:
```shell
$ ./my_test > gtest_output.txt
```
## Why should I prefer test fixtures over global variables?
There are several good reasons:
1. It's likely your test needs to change the states of its global variables.
This makes it difficult to keep side effects from escaping one test and
contaminating others, making debugging difficult. By using fixtures, each
test has a fresh set of variables that's different (but with the same
names). Thus, tests are kept independent of each other.
2. Global variables pollute the global namespace.
3. Test fixtures can be reused via subclassing, which cannot be done easily
with global variables. This is useful if many test suites have something in
common.
## What can the statement argument in ASSERT_DEATH() be?
`ASSERT_DEATH(statement, matcher)` (or any death assertion macro) can be used
wherever *`statement`* is valid. So basically *`statement`* can be any C++
statement that makes sense in the current context. In particular, it can
reference global and/or local variables, and can be:
* a simple function call (often the case),
* a complex expression, or
* a compound statement.
Some examples are shown here:
```c++
// A death test can be a simple function call.
TEST(MyDeathTest, FunctionCall) {
ASSERT_DEATH(Xyz(5), "Xyz failed");
}
// Or a complex expression that references variables and functions.
TEST(MyDeathTest, ComplexExpression) {
const bool c = Condition();
ASSERT_DEATH((c ? Func1(0) : object2.Method("test")),
"(Func1|Method) failed");
}
// Death assertions can be used anywhere in a function. In
// particular, they can be inside a loop.
TEST(MyDeathTest, InsideLoop) {
// Verifies that Foo(0), Foo(1), ..., and Foo(4) all die.
for (int i = 0; i < 5; i++) {
    SCOPED_TRACE(::testing::Message() << "where i is " << i);
    EXPECT_DEATH(Foo(i), "Foo has \\d+ errors");
}
}
// A death assertion can contain a compound statement.
TEST(MyDeathTest, CompoundStatement) {
  // Verifies that at least one of Bar(0), Bar(1), ..., and
  // Bar(4) dies.
ASSERT_DEATH({
for (int i = 0; i < 5; i++) {
Bar(i);
}
},
"Bar has \\d+ errors");
}
```
## I have a fixture class `FooTest`, but `TEST_F(FooTest, Bar)` gives me error ``"no matching function for call to `FooTest::FooTest()'"``. Why?
Googletest needs to be able to create objects of your test fixture class, so it
must have a default constructor. Normally the compiler will define one for you.
However, there are cases where you have to define your own:
* If you explicitly declare a non-default constructor for class `FooTest`
(`DISALLOW_EVIL_CONSTRUCTORS()` does this), then you need to define a
default constructor, even if it would be empty.
* If `FooTest` has a const non-static data member, then you have to define the
default constructor *and* initialize the const member in the initializer
list of the constructor. (Early versions of `gcc` don't force you to
initialize the const member. It's a bug that was fixed in `gcc 4`.)
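For example, a minimal sketch of the second case:
```c++
class FooTest : public ::testing::Test {
 protected:
  FooTest() : max_retries_(3) {}  // const member must be initialized here
  const int max_retries_;
};
```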
## Why does ASSERT_DEATH complain about previous threads that were already joined?
With the Linux pthread library, there is no turning back once you cross the line
from a single thread to multiple threads. The first time you create a thread, a
manager thread is created in addition, so you get 3, not 2, threads. Later when
the thread you create joins the main thread, the thread count decrements by 1,
but the manager thread will never be killed, so you still have 2 threads, which
means you cannot safely run a death test.
The new NPTL thread library doesn't suffer from this problem, as it doesn't
create a manager thread. However, if you don't control which machine your test
runs on, you shouldn't depend on this.
## Why does googletest require the entire test suite, instead of individual tests, to be named *DeathTest when it uses ASSERT_DEATH?
googletest does not interleave tests from different test suites. That is, it
runs all tests in one test suite first, and then runs all tests in the next test
suite, and so on. googletest does this because it needs to set up a test suite
before the first test in it is run, and tear it down afterwards. Splitting up
the test suite would require multiple set-up and tear-down processes, which is
inefficient and makes the semantics unclean.
If we were to determine the order of tests based on test name instead of test
suite name, then we would have a problem with the following situation:
```c++
TEST_F(FooTest, AbcDeathTest) { ... }
TEST_F(FooTest, Uvw) { ... }
TEST_F(BarTest, DefDeathTest) { ... }
TEST_F(BarTest, Xyz) { ... }
```
Since `FooTest.AbcDeathTest` needs to run before `BarTest.Xyz`, and we don't
interleave tests from different test suites, we need to run all tests in the
`FooTest` suite before running any test in the `BarTest` suite. This contradicts
the requirement to run `BarTest.DefDeathTest` before `FooTest.Uvw`.
## But I don't like calling my entire test suite \*DeathTest when it contains both death tests and non-death tests. What do I do?
You don't have to, but if you like, you may split up the test suite into
`FooTest` and `FooDeathTest`, where the names make it clear that they are
related:
```c++
class FooTest : public ::testing::Test { ... };
TEST_F(FooTest, Abc) { ... }
TEST_F(FooTest, Def) { ... }
using FooDeathTest = FooTest;
TEST_F(FooDeathTest, Uvw) { ... EXPECT_DEATH(...) ... }
TEST_F(FooDeathTest, Xyz) { ... ASSERT_DEATH(...) ... }
```
## googletest prints the LOG messages in a death test's child process only when the test fails. How can I see the LOG messages when the death test succeeds?
Printing the LOG messages generated by the statement inside `EXPECT_DEATH()`
makes it harder to search for real problems in the parent's log. Therefore,
googletest only prints them when the death test has failed.
If you really need to see such LOG messages, a workaround is to temporarily
break the death test (e.g. by changing the regex pattern it is expected to
match). Admittedly, this is a hack. We'll consider a more permanent solution
after the fork-and-exec-style death tests are implemented.
## The compiler complains about `no match for 'operator<<'` when I use an assertion. What gives?
If you use a user-defined type `FooType` in an assertion, you must make sure
there is an `std::ostream& operator<<(std::ostream&, const FooType&)` function
defined such that we can print a value of `FooType`.
In addition, if `FooType` is declared in a namespace, the `<<` operator also
needs to be defined in the *same* namespace. See
[Tip of the Week #49](http://abseil.io/tips/49) for details.
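For example, here is a sketch for a hypothetical `FooType`; note that the
printer lives in the same namespace as the type:
```c++
#include <ostream>

namespace my_project {

struct FooType {
  int value = 0;
};

std::ostream& operator<<(std::ostream& os, const FooType& foo) {
  return os << "FooType(" << foo.value << ")";
}

}  // namespace my_project
```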
## How do I suppress the memory leak messages on Windows?
Since the statically initialized googletest singleton requires allocations on
the heap, the Visual C++ memory leak detector will report memory leaks at the
end of the program run. The easiest way to avoid this is to use the
`_CrtMemCheckpoint` and `_CrtMemDumpAllObjectsSince` calls to not report any
statically initialized heap objects. See MSDN for more details and additional
heap check/debug routines.
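A minimal sketch of that approach, assuming a Visual C++ debug build, might
look like this:
```c++
#include <crtdbg.h>

#include "gtest/gtest.h"

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);  // allocates the gtest singleton
  _CrtMemState checkpoint;
  _CrtMemCheckpoint(&checkpoint);           // snapshot after initialization
  const int result = RUN_ALL_TESTS();
  _CrtMemDumpAllObjectsSince(&checkpoint);  // reports only later allocations
  return result;
}
```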
## How can my code detect if it is running in a test?
If you write code that sniffs whether it's running in a test and does different
things accordingly, you are leaking test-only logic into production code and
there is no easy way to ensure that the test-only code paths aren't run by
mistake in production. Such cleverness also leads to
[Heisenbugs](https://en.wikipedia.org/wiki/Heisenbug). Therefore we strongly
advise against the practice, and googletest doesn't provide a way to do it.
In general, the recommended way to cause the code to behave differently under
test is [Dependency Injection](http://en.wikipedia.org/wiki/Dependency_injection). You can inject
different functionality from the test and from the production code. Since your
production code doesn't link in the for-test logic at all (the
[`testonly`](http://docs.bazel.build/versions/master/be/common-definitions.html#common.testonly) attribute for BUILD targets helps to ensure
that), there is no danger in accidentally running it.
However, if you *really*, *really*, *really* have no choice, and if you follow
the rule of ending your test program names with `_test`, you can use the
*horrible* hack of sniffing your executable name (`argv[0]` in `main()`) to know
whether the code is under test.
## How do I temporarily disable a test?
If you have a broken test that you cannot fix right away, you can add the
`DISABLED_` prefix to its name. This will exclude it from execution. This is
better than commenting out the code or using `#if 0`, as disabled tests are
still compiled (and thus won't rot).
To include disabled tests in test execution, just invoke the test program with
the `--gtest_also_run_disabled_tests` flag.
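For instance (the test and the function under test are hypothetical):
```c++
// Compiled but skipped by default; run it with --gtest_also_run_disabled_tests.
TEST(MyLibTest, DISABLED_HandlesEmptyInput) {
  EXPECT_TRUE(HandlesEmptyInput());  // hypothetical function under test
}
```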
## Is it OK if I have two separate `TEST(Foo, Bar)` test methods defined in different namespaces?
Yes.
The rule is **all test methods in the same test suite must use the same fixture
class.** This means that the following is **allowed** because both tests use the
same fixture class (`::testing::Test`).
```c++
namespace foo {
TEST(CoolTest, DoSomething) {
SUCCEED();
}
} // namespace foo
namespace bar {
TEST(CoolTest, DoSomething) {
SUCCEED();
}
} // namespace bar
```
However, the following code is **not allowed** and will produce a runtime error
from googletest because the test methods are using different test fixture
classes with the same test suite name.
```c++
namespace foo {
class CoolTest : public ::testing::Test {}; // Fixture foo::CoolTest
TEST_F(CoolTest, DoSomething) {
SUCCEED();
}
} // namespace foo
namespace bar {
class CoolTest : public ::testing::Test {}; // Fixture: bar::CoolTest
TEST_F(CoolTest, DoSomething) {
SUCCEED();
}
} // namespace bar
```
# gMock Cheat Sheet
## Defining a Mock Class
### Mocking a Normal Class {#MockClass}
Given
```cpp
class Foo {
...
virtual ~Foo();
virtual int GetSize() const = 0;
virtual string Describe(const char* name) = 0;
virtual string Describe(int type) = 0;
virtual bool Process(Bar elem, int count) = 0;
};
```
(note that `~Foo()` **must** be virtual) we can define its mock as
```cpp
#include "gmock/gmock.h"
class MockFoo : public Foo {
...
MOCK_METHOD(int, GetSize, (), (const, override));
MOCK_METHOD(string, Describe, (const char* name), (override));
MOCK_METHOD(string, Describe, (int type), (override));
MOCK_METHOD(bool, Process, (Bar elem, int count), (override));
};
```
To create a "nice" mock, which ignores all uninteresting calls, a "naggy" mock,
which warns on all uninteresting calls, or a "strict" mock, which treats them as
failures:
```cpp
using ::testing::NiceMock;
using ::testing::NaggyMock;
using ::testing::StrictMock;
NiceMock<MockFoo> nice_foo; // The type is a subclass of MockFoo.
NaggyMock<MockFoo> naggy_foo; // The type is a subclass of MockFoo.
StrictMock<MockFoo> strict_foo; // The type is a subclass of MockFoo.
```
{: .callout .note}
**Note:** A mock object is currently naggy by default. We may make it nice by
default in the future.
### Mocking a Class Template {#MockTemplate}
Class templates can be mocked just like any class.
To mock
```cpp
template <typename Elem>
class StackInterface {
...
virtual ~StackInterface();
virtual int GetSize() const = 0;
virtual void Push(const Elem& x) = 0;
};
```
(note that all member functions that are mocked, including `~StackInterface()`,
**must** be virtual), define its mock as:
```cpp
template <typename Elem>
class MockStack : public StackInterface<Elem> {
...
MOCK_METHOD(int, GetSize, (), (const, override));
MOCK_METHOD(void, Push, (const Elem& x), (override));
};
```
### Specifying Calling Conventions for Mock Functions
If your mock function doesn't use the default calling convention, you can
specify it by adding `Calltype(convention)` to `MOCK_METHOD`'s 4th parameter.
For example,
```cpp
MOCK_METHOD(bool, Foo, (int n), (Calltype(STDMETHODCALLTYPE)));
MOCK_METHOD(int, Bar, (double x, double y),
(const, Calltype(STDMETHODCALLTYPE)));
```
where `STDMETHODCALLTYPE` is defined by `<objbase.h>` on Windows.
## Using Mocks in Tests {#UsingMocks}
The typical work flow is:
1. Import the gMock names you need to use. All gMock symbols are in the
`testing` namespace unless they are macros or otherwise noted.
2. Create the mock objects.
3. Optionally, set the default actions of the mock objects.
4. Set your expectations on the mock objects (How will they be called? What
will they do?).
5. Exercise code that uses the mock objects; if necessary, check the result
using googletest assertions.
6. When a mock object is destructed, gMock automatically verifies that all
expectations on it have been satisfied.
Here's an example:
```cpp
using ::testing::Return; // #1
TEST(BarTest, DoesThis) {
MockFoo foo; // #2
ON_CALL(foo, GetSize()) // #3
.WillByDefault(Return(1));
// ... other default actions ...
EXPECT_CALL(foo, Describe(5)) // #4
.Times(3)
.WillRepeatedly(Return("Category 5"));
// ... other expectations ...
EXPECT_EQ(MyProductionFunction(&foo), "good"); // #5
} // #6
```
## Setting Default Actions {#OnCall}
gMock has a **built-in default action** for any function that returns `void`,
`bool`, a numeric value, or a pointer. Since C++11, it will additionally return
the default-constructed value, if one exists for the given type.
To customize the default action for functions with return type `T`, use
[`DefaultValue<T>`](reference/mocking.md#DefaultValue). For example:
```cpp
// Sets the default action for return type std::unique_ptr<Buzz> to
// creating a new Buzz every time.
DefaultValue<std::unique_ptr<Buzz>>::SetFactory(
    [] { return std::make_unique<Buzz>(AccessLevel::kInternal); });
// When this fires, the default action of MakeBuzz() will run, which
// will return a new Buzz object.
EXPECT_CALL(mock_buzzer_, MakeBuzz("hello")).Times(AnyNumber());
auto buzz1 = mock_buzzer_.MakeBuzz("hello");
auto buzz2 = mock_buzzer_.MakeBuzz("hello");
EXPECT_NE(buzz1, nullptr);
EXPECT_NE(buzz2, nullptr);
EXPECT_NE(buzz1, buzz2);
// Resets the default action for return type std::unique_ptr<Buzz>,
// to avoid interfering with other tests.
DefaultValue<std::unique_ptr<Buzz>>::Clear();
```
To customize the default action for a particular method of a specific mock
object, use [`ON_CALL`](reference/mocking.md#ON_CALL). `ON_CALL` has a similar
syntax to `EXPECT_CALL`, but it is used for setting default behaviors when you
do not require that the mock method is called. See
[Knowing When to Expect](gmock_cook_book.md#UseOnCall) for a more detailed
discussion.
## Setting Expectations {#ExpectCall}
See [`EXPECT_CALL`](reference/mocking.md#EXPECT_CALL) in the Mocking Reference.
## Matchers {#MatcherList}
See the [Matchers Reference](reference/matchers.md).
## Actions {#ActionList}
See the [Actions Reference](reference/actions.md).
## Cardinalities {#CardinalityList}
See the [`Times` clause](reference/mocking.md#EXPECT_CALL.Times) of
`EXPECT_CALL` in the Mocking Reference.
## Expectation Order
By default, expectations can be matched in *any* order. If some or all
expectations must be matched in a given order, you can use the
[`After` clause](reference/mocking.md#EXPECT_CALL.After) or
[`InSequence` clause](reference/mocking.md#EXPECT_CALL.InSequence) of
`EXPECT_CALL`, or use an [`InSequence` object](reference/mocking.md#InSequence).
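For example, a sketch using an `InSequence` object inside a test body (the
mock `foo` and its methods are hypothetical):
```cpp
using ::testing::_;
using ::testing::InSequence;

{
  InSequence seq;  // all expectations below must match in this order
  EXPECT_CALL(foo, Open("file"));
  EXPECT_CALL(foo, Write(_));
  EXPECT_CALL(foo, Close());
}
```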
## Verifying and Resetting a Mock
gMock will verify the expectations on a mock object when it is destructed, or
you can do it earlier:
```cpp
using ::testing::Mock;
...
// Verifies and removes the expectations on mock_obj;
// returns true if and only if successful.
Mock::VerifyAndClearExpectations(&mock_obj);
...
// Verifies and removes the expectations on mock_obj;
// also removes the default actions set by ON_CALL();
// returns true if and only if successful.
Mock::VerifyAndClear(&mock_obj);
```
Do not set new expectations on a mock after verifying and clearing it.
Setting expectations after code that exercises the mock has undefined behavior.
See [Using Mocks in Tests](gmock_for_dummies.md#using-mocks-in-tests) for more
information.
You can also tell gMock that a mock object can be leaked and doesn't need to be
verified:
```cpp
Mock::AllowLeak(&mock_obj);
```
## Mock Classes
gMock defines a convenient mock class template
```cpp
class MockFunction<R(A1, ..., An)> {
public:
MOCK_METHOD(R, Call, (A1, ..., An));
};
```
See this [recipe](gmock_cook_book.md#using-check-points) for one application of
it.
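For example, a sketch of mocking a callback (`RunWithCallback()` is
hypothetical):
```cpp
using ::testing::MockFunction;

TEST(CallbackTest, InvokesCallback) {
  MockFunction<void(int)> callback;
  EXPECT_CALL(callback, Call(42));
  RunWithCallback(callback.AsStdFunction());  // hypothetical code under test
}
```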
## Flags
| Flag | Description |
| :----------------------------- | :---------------------------------------- |
| `--gmock_catch_leaked_mocks=0` | Don't report leaked mock objects as failures. |
| `--gmock_verbose=LEVEL` | Sets the default verbosity level (`info`, `warning`, or `error`) of Google Mock messages. |