Yevhen Klymentiev
Coding Standards: Rules, conventions, and best practices I follow and recommend to teams

Testing

Ensure software correctness, prevent regressions, and enable safe refactoring by writing reliable, readable, and maintainable tests.

Tests should reflect the intended behavior of the system, validate critical logic, and serve as living documentation for how components and modules are expected to behave. Consistent testing practices increase confidence in deployments and improve overall development velocity.

Write tests that reflect behavior, not implementation

Tests should describe what the system does, not how it does it. Focus on observable behavior rather than internal details or temporary implementation choices. This makes tests more resilient to refactoring and easier to understand.

JavaScript
// Tests that rely on internal function calls
const spy = jest.spyOn(auth, 'validateToken'); // 'auth' is the module that owns validateToken
getUserFromToken(token);
expect(spy).toHaveBeenCalled();
JavaScript
// Tests expected behavior
expect(getUserFromToken(token)).toEqual({ id: 123, name: 'Alice' });

Test the most critical paths first

Focus your testing efforts on the most important flows — business-critical logic, user interactions, and known edge cases. Coverage is useful, but prioritizing impact is more effective than chasing 100%.

JavaScript
// Tests utility that is rarely used
describe('capitalizeFirstLetter', () => { ... });
JavaScript
// Tests core payment flow
describe('createInvoice', () => {
  it('throws on duplicate invoice ID', () => { ... });
});

Use descriptive test and suite names

Write clear test descriptions that explain what is being tested and under what condition. Avoid vague or overly technical naming — the reader should understand the purpose without reading the test body.

JavaScript
test('should work', () => { ... });
test('fails if bad data', () => { ... });
JavaScript
test('returns empty list when no records match', () => { ... });
test('throws if required field is missing', () => { ... });

Keep tests isolated and deterministic

Each test should run independently of the others and produce the same result every time. Avoid shared state, time-dependent behavior, and reliance on external systems; these are the most common sources of flaky tests.

JavaScript
// Relies on global state or a previous test
users.push({ id: 1 });
expect(users.length).toBeGreaterThan(0);
JavaScript
// Uses fresh data for each test
const users = [{ id: 1 }];
expect(users).toHaveLength(1);

Use fixtures and factories for reusable data

Avoid duplicating mock data across tests. Use factory functions or fixture helpers to create consistent, customizable test inputs. This improves readability and reduces maintenance cost.

JavaScript
test('user has name', () => {
  const user = { id: 1, name: 'Alice', email: 'a@example.com' };
  ...
});
JavaScript
const createUser = (overrides = {}) => ({
  id: 1,
  name: 'Test User',
  email: 'user@example.com',
  ...overrides,
});

test('user has name', () => {
  const user = createUser();
  ...
});

Follow the Arrange–Act–Assert pattern

Structure each test clearly into three logical steps: set up test data (Arrange), run the function or trigger behavior (Act), and verify the result (Assert). This improves readability and consistency.

JavaScript
// Arrange
const price = 100;

// Act
const result = calculateTax(price);

// Assert
expect(result).toBe(20);

Mock only external or unreliable dependencies

Use mocking to isolate the unit under test from external services (e.g., HTTP, DB, time). Avoid mocking internal logic — this can tightly couple tests to implementation and reduce coverage confidence.

JavaScript
// Mocking internal function
jest.mock('./utils', () => ({
  doSomething: jest.fn(() => 123),
}));
JavaScript
// Mocking network call
jest.spyOn(apiClient, 'fetchData').mockResolvedValue({ data: [] });

Keep tests fast and lightweight

Slow tests discourage frequent runs and slow down CI. Avoid unnecessary waiting, oversized data sets, and repeated re-renders. Favor in-memory testing, and mock time or I/O where needed.

JavaScript
await new Promise(res => setTimeout(res, 3000));
JavaScript
jest.useFakeTimers();
jest.advanceTimersByTime(3000);

Balance unit, integration, and E2E tests

Use the testing pyramid as a guideline: many fast unit tests, fewer integration tests, and even fewer full end-to-end tests. Each type has a purpose — aim for coverage without duplication or excessive cost.

  • Unit test: calculateTotal(items)

  • Integration test: POST /checkout with multiple items

  • E2E test: simulate user completing a purchase
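
As a rough sketch of the first two levels (the E2E level would live in a browser-driving tool such as Playwright or Cypress), here is one hedged Jest example. `calculateTotal`, the `./cart` and `./app` modules, and the `/checkout` route are hypothetical stand-ins, and supertest is assumed for in-process HTTP calls.

JavaScript
const request = require('supertest'); // assumed for in-process HTTP testing
const { calculateTotal } = require('./cart'); // hypothetical module
const app = require('./app'); // hypothetical Express app

// Unit level: pure logic, no I/O
test('calculateTotal sums item prices', () => {
  expect(calculateTotal([{ price: 10 }, { price: 5 }])).toBe(15);
});

// Integration level: route and handlers exercised together, no real server
test('POST /checkout accepts multiple items', async () => {
  await request(app)
    .post('/checkout')
    .send({ items: [{ id: 1 }, { id: 2 }] })
    .expect(200);
});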

Track and improve test coverage strategically

Code coverage is a helpful metric — but not a goal in itself. Use it to find untested logic, not to chase 100%. Focus coverage on critical paths, edge cases, and regressions rather than trivial code.

JavaScript
// Adds meaningless tests to boost coverage
expect(true).toBe(true);
JavaScript
// Adds a test for an uncovered error path
test('throws if email is invalid', () => {
  expect(() => validateEmail('bad@')).toThrow();
});

Treat tests as documentation

Write tests that clearly show how a function or module should be used. Avoid obscure or hyper-abstract tests. Good tests help new devs understand code behavior without reading the full implementation.

JavaScript
test('x works', () => {
  expect(doX(1)).toBe(42);
});
JavaScript
test('returns 42 when input is 1 (default multiplier)', () => {
  expect(doX(1)).toBe(42);
});

Don’t over-test or duplicate logic

Avoid writing tests that simply repeat the implementation logic or assert obvious outcomes. Over-testing produces brittle suites and wasted maintenance effort. Focus on behavior and intent.

JavaScript
test('adds two numbers', () => {
  expect(add(1, 2)).toBe(3); // restates the implementation if add is literally a + b
});

Integrate tests into CI pipeline

Always run tests as part of your continuous integration (CI) process. Prevent merging code that breaks existing functionality by enforcing test execution on every commit or pull request.

Bad:

  • Tests only run locally or manually

  • CI passes even if tests fail

Good:

  • npm test runs automatically in GitHub Actions, GitLab CI, etc.

  • PRs blocked unless test suite passes

Use fail-fast and clear output

Configure your test runner to fail early and show concise errors. This saves time during debugging and avoids noisy logs. Prioritize readability and quick feedback.

Bad:

  • Tests keep running after major failure

  • Output is hundreds of lines with unclear error sources

Good:

  • Fails fast with minimal logging

  • Uses --bail or similar flag

  • Output includes failing test names and file paths
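
As one way to wire this up with Jest, the `bail` option stops the run after the first failing suite; the values below are an illustrative sketch, not a prescription.

JavaScript
// jest.config.js: a minimal fail-fast sketch (values are illustrative)
module.exports = {
  bail: 1,        // abort the run after the first failing test suite
  verbose: false, // keep per-test output short; failures still show names and paths
};

The same behavior is available ad hoc with npx jest --bail.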

Report coverage automatically

Generate and publish coverage reports to track changes over time. Many CI tools support integration with services like Codecov or Coveralls. Use this as a visibility tool, not as a hard rule.

Bad:

  • No feedback on how coverage evolves

  • Unknown gaps in test quality

Good:

  • Generates coverage summary (e.g., lines: 83%)

  • Uploads reports to external dashboard

  • Highlights diffs per commit/PR
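
A minimal Jest sketch of automated coverage output follows; the reporters match common Codecov/Coveralls setups, and the threshold numbers are placeholders to tune, not recommendations.

JavaScript
// jest.config.js: coverage reporting sketch (threshold numbers are placeholders)
module.exports = {
  collectCoverage: true,
  coverageReporters: ['text-summary', 'lcov'], // console summary + lcov file for external dashboards
  coverageThreshold: {
    global: { lines: 80, branches: 70 }, // fail the run if coverage drops below these
  },
};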
