# Googletest Primer

## Introduction: Why googletest?

*googletest* helps you write better C++ tests.

googletest is a testing framework developed by the Testing Technology team with
Google's specific requirements and constraints in mind. No matter whether you
work on Linux, Windows, or a Mac, if you write C++ code, googletest can help
you. And it supports *any* kind of tests, not just unit tests.

So what makes a good test, and how does googletest fit in? We believe:

1.  Tests should be *independent* and *repeatable*. It's a pain to debug a test
    that succeeds or fails as a result of other tests. googletest isolates the
    tests by running each of them on a different object. When a test fails,
    googletest allows you to run it in isolation for quick debugging.
1.  Tests should be well *organized* and reflect the structure of the tested
    code. googletest groups related tests into test suites that can share data
    and subroutines. This common pattern is easy to recognize and makes tests
    easy to maintain. Such consistency is especially helpful when people switch
    projects and start to work on a new code base.
1.  Tests should be *portable* and *reusable*. Google has a lot of code that is
    platform-neutral; its tests should also be platform-neutral. googletest
    works on different OSes, with different compilers, with or without
    exceptions, so googletest tests can work with a variety of configurations.
1.  When tests fail, they should provide as much *information* about the problem
    as possible. googletest doesn't stop at the first test failure. Instead, it
    only stops the current test and continues with the next. You can also set up
    tests that report non-fatal failures after which the current test continues.
    Thus, you can detect and fix multiple bugs in a single run-edit-compile
    cycle.
1.  The testing framework should liberate test writers from housekeeping chores
    and let them focus on the test *content*. googletest automatically keeps
    track of all tests defined, and doesn't require the user to enumerate them
    in order to run them.
1.  Tests should be *fast*. With googletest, you can reuse shared resources
    across tests and pay for the set-up/tear-down only once, without making
    tests depend on each other.

Since googletest is based on the popular xUnit architecture, you'll feel right
at home if you've used JUnit or PyUnit before. If not, it will take you about 10
minutes to learn the basics and get started. So let's go!

## Beware of the nomenclature

_Note:_ There might be some confusion arising from different definitions of the
terms _Test_, _Test Case_ and _Test Suite_, so beware of misunderstanding these.

Historically, googletest started to use the term _Test Case_ for grouping
related tests, whereas current publications including the International Software
Testing Qualifications Board ([ISTQB](http://www.istqb.org/)) and various
textbooks on Software Quality use the term _[Test
Suite](http://glossary.istqb.org/search/test%20suite)_ for this.

The related term _Test_, as it is used in googletest, corresponds to the term
_[Test Case](http://glossary.istqb.org/search/test%20case)_ of ISTQB and
others.

The term _Test_ is commonly used with a broad enough meaning, including ISTQB's
definition of _Test Case_, so it's not much of a problem here. But the term
_Test Case_ as it was used in Google Test is of contradictory sense and thus
confusing.

googletest recently started replacing the term _Test Case_ with _Test Suite_.
The preferred API is `TestSuite*`. The older `TestCase*` API is being slowly
deprecated and refactored away.

So please be aware of the different definitions of the terms:

Meaning                                                                              | googletest Term         | [ISTQB](http://www.istqb.org/) Term
:----------------------------------------------------------------------------------- | :---------------------- | :----------------------------------
Exercise a particular program path with specific input values and verify the results | [TEST()](#simple-tests) | [Test Case](http://glossary.istqb.org/search/test%20case)

## Basic Concepts

When using googletest, you start by writing *assertions*, which are statements
that check whether a condition is true. An assertion's result can be *success*,
*nonfatal failure*, or *fatal failure*. If a fatal failure occurs, it aborts the
current function; otherwise the program continues normally.

*Tests* use assertions to verify the tested code's behavior. If a test crashes
or has a failed assertion, then it *fails*; otherwise it *succeeds*.

A *test suite* contains one or many tests. You should group your tests into test
suites that reflect the structure of the tested code. When multiple tests in a
test suite need to share common objects and subroutines, you can put them into a
*test fixture* class.

A *test program* can contain multiple test suites.

We'll now explain how to write a test program, starting at the individual
assertion level and building up to tests and test suites.

## Assertions

googletest assertions are macros that resemble function calls. You test a class
or function by making assertions about its behavior. When an assertion fails,
googletest prints the assertion's source file and line number location, along
with a failure message. You may also supply a custom failure message which will
be appended to googletest's message.

The assertions come in pairs that test the same thing but have different effects
on the current function. `ASSERT_*` versions generate fatal failures when they
fail, and **abort the current function**. `EXPECT_*` versions generate nonfatal
failures, which don't abort the current function. Usually `EXPECT_*` are
preferred, as they allow more than one failure to be reported in a test.
However, you should use `ASSERT_*` if it doesn't make sense to continue when the
assertion in question fails.
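
For example, here is a minimal sketch; `ParseConfig()` and the `Config` type
are hypothetical stand-ins for code under test, not part of googletest:

```c++
// Assume the hypothetical ParseConfig() returns nullptr on failure.
const Config* config = ParseConfig("key=value");

// Fatal: if parsing failed, the checks below would dereference a null
// pointer, so abort the current function immediately.
ASSERT_NE(config, nullptr);

// Nonfatal: these checks are independent, so a mismatch in one is reported
// while the remaining checks still run.
EXPECT_EQ(config->key(), "key");
EXPECT_EQ(config->value(), "value");
```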

Since a failed `ASSERT_*` returns from the current function immediately,
possibly skipping clean-up code that comes after it, it may cause a space leak.
Depending on the nature of the leak, it may or may not be worth fixing - so keep
this in mind if you get a heap checker error in addition to assertion errors.
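
A sketch of the pitfall, using a hypothetical `FillBuffer()` helper:

```c++
char* buffer = new char[16];
// If this assertion fails, the current function returns here, the delete[]
// below never runs, and the allocation is leaked.
ASSERT_TRUE(FillBuffer(buffer, 16));
// ... use buffer ...
delete[] buffer;
```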

To provide a custom failure message, simply stream it into the macro using the
`<<` operator, or a sequence of such operators. An example:

```c++
ASSERT_EQ(x.size(), y.size()) << "Vectors x and y are of unequal length";

for (int i = 0; i < x.size(); ++i) {
  EXPECT_EQ(x[i], y[i]) << "Vectors x and y differ at index " << i;
}
```

Anything that can be streamed to an `ostream` can be streamed to an assertion
macro--in particular, C strings and `string` objects. If a wide string
(`wchar_t*`, `TCHAR*` in `UNICODE` mode on Windows, or `std::wstring`) is
streamed to an assertion, it will be translated to UTF-8 when printed.
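
For instance, a `std::wstring` can be streamed into the failure message
directly; the `IsRegistered()` check below is a hypothetical example:

```c++
std::wstring name = L"Schrödinger";
// The wide string is converted to UTF-8 when the failure message is printed.
EXPECT_TRUE(IsRegistered(name)) << "unknown name: " << name;
```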

### Basic Assertions

These assertions do basic true/false condition testing.

Fatal assertion            | Nonfatal assertion         | Verifies
-------------------------- | -------------------------- | --------------------
`ASSERT_TRUE(condition);`  | `EXPECT_TRUE(condition);`  | `condition` is true
`ASSERT_FALSE(condition);` | `EXPECT_FALSE(condition);` | `condition` is false

Remember, when they fail, `ASSERT_*` yields a fatal failure and returns from the
current function, while `EXPECT_*` yields a nonfatal failure, allowing the
function to continue running. In either case, an assertion failure means its
containing test fails.
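
For example, with a hypothetical `IsPrime()` helper (similar to the one used in
the googletest samples):

```c++
EXPECT_TRUE(IsPrime(3));   // Nonfatal: on failure, continue with the next check.
EXPECT_FALSE(IsPrime(4));  // Nonfatal as well.
ASSERT_TRUE(IsPrime(5)) << "5 must be prime";  // Fatal: on failure, return now.
```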

**Availability**: Linux, Windows, Mac.

### Binary Comparison

This section describes assertions that compare two values.

Fatal assertion          | Nonfatal assertion       | Verifies
------------------------ | ------------------------ | --------------
`ASSERT_EQ(val1, val2);` | `EXPECT_EQ(val1, val2);` | `val1 == val2`
`ASSERT_NE(val1, val2);` | `EXPECT_NE(val1, val2);` | `val1 != val2`
`ASSERT_LT(val1, val2);` | `EXPECT_LT(val1, val2);` | `val1 < val2`
`ASSERT_LE(val1, val2);` | `EXPECT_LE(val1, val2);` | `val1 <= val2`
`ASSERT_GT(val1, val2);` | `EXPECT_GT(val1, val2);` | `val1 > val2`
`ASSERT_GE(val1, val2);` | `EXPECT_GE(val1, val2);` | `val1 >= val2`

Value arguments must be comparable by the assertion's comparison operator or
you'll get a compiler error. We used to require the arguments to support the
`<<` operator for streaming to an `ostream`, but it's no longer necessary. If
`<<` is supported, it will be called to print the arguments when the assertion
fails; otherwise googletest will attempt to print them in the best way it can.
For more details and how to customize the printing of the arguments, see the
gMock [recipe](../../googlemock/docs/CookBook.md#teaching-google-mock-how-to-print-your-values).
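
A short sketch of these assertions in use on a `std::vector`:

```c++
std::vector<int> v = {1, 2, 3};

// Fatal: the element accesses below are only meaningful with enough elements.
ASSERT_GE(v.size(), 2u);

EXPECT_EQ(v.size(), 3u);
EXPECT_NE(v.front(), v.back());
EXPECT_LT(v.front(), v.back());
EXPECT_LE(v[0], v[1]);
```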