
Lightspeed core service

CONTRIBUTING

TL;DR

  1. Create your own fork of the repo
  2. Make changes to the code in your fork
  3. Run unit tests and integration tests
  4. Check the code with linters
  5. Submit PR from your fork to main branch of the project repo

Prerequisites

Development requires at least Python 3.12, which brings significant performance improvements, optimizations that benefit modern ML, AI, LLM, and NL stacks, and better asynchronous processing capabilities. Python 3.13 can also be used.

Tooling installation

  1. pip install --user uv
  2. uv --version – should return no error

Setting up your development environment

# clone your fork
git clone https://github.com/YOUR-GIT-PROFILE/lightspeed-stack.git

# move into the directory
cd lightspeed-stack

# setup your devel environment with uv
uv sync --group dev

# Now you can run the commands below through make targets, or prefix them with
# `uv run` (e.g. `uv run make test`). Alternatively, run `uv venv`, which
# creates a virtual environment and prints the activation command, and then
# run the commands inside that venv.

# run unit tests
make test-unit

# run integration tests
make test-integration

# code formatting
# (this is also run automatically as part of pre-commit hook if configured)
make format

# code style and docstring style
# (this is also run automatically as part of pre-commit hook if configured)
make verify

# check type hints
# (this is also run automatically as part of pre-commit hook)
make check-types

Happy hacking!

PR description

Definition of Done

A deliverable is to be considered “done” when

Feature design process

When implementing a new feature or significant change, we take the proposal, run a spike for it, present the findings for review, make decisions about the scope and design, create a permanent document with the feature specification, and file ready-for-implementation JIRA tickets.

The steps below reference Claude Code slash commands (e.g., /spike) that automate parts of the process. The underlying skill definitions are in .claude/commands/ and can be adapted for other AI coding tools.

  1. Run a spike (/spike or /spike LCORE-1234 or /spike 1234) — research the problem, evaluate design alternatives, build a PoC if needed, document decisions and recommendations. The spike produces two documents and a set of proposed JIRAs. → How to run a spike

  2. Write a spec doc (/spec-doc) — the permanent in-repo feature spec. Records the approved design: requirements, architecture, implementation suggestions. All implementation JIRAs reference it. → How to write a spec doc

  3. Get decisions confirmed — open a PR with the spike doc and spec doc. Reviewers confirm or override the design decisions.

  4. File implementation tickets (/file-jiras or dev-tools/file-jiras.sh --spike-doc <path> --feature-ticket <key>) — once decisions are confirmed, file the JIRA sub-tickets. Each ticket references the spec doc. The tool auto-creates an Epic under the feature ticket and files children under it.

  5. Update spike doc with filed ticket numbers — replace LCORE-???? placeholders with actual ticket keys.

If the feature is well-understood and doesn’t need research, skip step 1 and start at step 2.

Templates for all of the above are in docs/contributing/templates/. If a PoC is part of the spike, see how to organize PoC output.

CLI tools (work without Claude Code):

AI assistants

“Mark” code with substantial AI-generated portions.

Nontrivial and substantial AI-generated or AI-assisted content should be “marked” in appropriate cases. In deciding how to approach this, consider adopting one or more of the following recommendations. (This assumes you have not concluded that a suggestion is a match to some existing third-party code.)

In a commit message, or in a pull request/merge request description field, identify the code assistant that you used, perhaps elaborating on how it was used. You may wish to use a trailer like “Assisted-by:” or “Generated-by:”. For example:

Assisted-by: <name of code assistant>

In a source file comment, indicate the use of the code assistant. For example:

Generated by: <name of code assistant>
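As a sketch, a source file whose contents were largely produced by an assistant might carry such a comment at the top; the assistant name and the function below are placeholders, not a real attribution:

```python
# Generated by: ExampleAssistant (placeholder; substitute the actual tool name)

def greet(name: str) -> str:
    """Return a short greeting for the given name."""
    return f"Hello, {name}!"
```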

If the contents of an entire file (or files) in a PR were substantially generated by a code assistant with little or no creative input or modification by you (which should typically not be the case), copyright protection may be limited, and it is particularly appropriate to mark the contents of the file as recommended above.

Automation

Pre-commit hook settings

You can run formatters and linters automatically for all commits: just copy the file hooks/pre-commit into the .git/hooks/ subdirectory. This must be done manually because the copied file is an executable script (so from Git's point of view it would be unsafe to enable it automatically).

Code coverage measurement

During testing, code coverage is measured. If coverage falls below the defined threshold (see the [tool.coverage.report] section in pyproject.toml for the actual value), the tests fail. We measure and check code coverage in order to keep software quality high.
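As a sketch, the relevant pyproject.toml section might look like the following; the threshold value here is illustrative only, the real one lives in the repository's pyproject.toml:

```toml
[tool.coverage.report]
# Fail the test run when total coverage drops below this percentage
# (illustrative value; check pyproject.toml for the actual threshold).
fail_under = 60
```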

Code coverage reports are generated in JSON and also in a format compatible with the JUnit test automation framework. You can also run make coverage-report to generate code coverage reports as interactive HTML pages. These pages are stored in the htmlcov subdirectory; just open its index page in your web browser.

Linters

Black, Ruff, Pyright, Pylint, Pydocstyle, Mypy, and Bandit are used as linters. A number of linter rules are enabled for this repository; all of them are specified in pyproject.toml, for example in the [tool.ruff] and [tool.pylint."MESSAGES CONTROL"] sections. Specific rules can be disabled via the ignore parameter (currently empty).

Type hints checks

You can check whether the type hints added to the code are correct and whether assignments, function calls, etc. use values of the right types. This check is invoked by the following command:

make check-types

Please note that the type hints check might be very slow on the first run. Subsequent runs are much faster thanks to the cache that Mypy uses. This check is part of a CI job that verifies the sources.

Ruff

The list of all rules recognized by Ruff can be retrieved with:

ruff linter

Descriptions of all Ruff rules are available at https://docs.astral.sh/ruff/rules/

Ruff rules can be disabled in source code (for a given line or block) with a special noqa comment. For example:

# noqa: E501
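In practice the comment sits at the end of the offending line. A hypothetical example that suppresses the line-too-long rule (E501) for a single long URL:

```python
# The noqa comment silences Ruff's E501 (line too long) for this line only;
# the URL below is a made-up example.
RULES_URL = "https://docs.astral.sh/ruff/rules/#e501-line-too-long-with-a-deliberately-long-anchor"  # noqa: E501
```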

Pylint

The list of all Pylint rules can be retrieved with:

pylint --list-msgs

Descriptions of all rules are available at https://pylint.readthedocs.io/en/latest/user_guide/checkers/features.html

To disable a Pylint rule in source code, use a comment in the following format:

# pylint: disable=C0415
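C0415 is Pylint's import-outside-toplevel check. A minimal sketch of suppressing it for an intentional lazy import (the function itself is hypothetical):

```python
def load_json_module():
    """Lazily import the json module, e.g. to keep startup cost low."""
    # The import is intentionally inside the function, so we silence
    # Pylint's C0415 (import-outside-toplevel) for this line only.
    import json  # pylint: disable=C0415
    return json
```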

Security checks

The static security check is performed by the Bandit tool. Run the check with:

make security-check

Code style

Function Standards

Documentation

All functions require docstrings with brief descriptions

Type annotations

Use complete type annotations for parameters and return types

Naming conventions

Use snake_case with descriptive, action-oriented names (get_, validate_, check_)

Async functions

Use async def for I/O operations and external API calls
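A minimal sketch pulling the standards above together; the function names and the model list are invented for illustration:

```python
import asyncio

SUPPORTED_MODELS = ["model-a", "model-b"]  # hypothetical model names


def validate_model_name(model_name: str) -> bool:
    """Check whether the given model name is supported."""
    return model_name in SUPPORTED_MODELS


async def get_model_info(model_name: str) -> dict[str, str]:
    """Fetch information about a model.

    Async because a real implementation would call an external API.
    """
    await asyncio.sleep(0)  # placeholder for an actual I/O call
    return {"name": model_name, "status": "available"}


# usage
print(validate_model_name("model-a"))          # True
print(asyncio.run(get_model_info("model-a")))
```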

Error handling

Formatting rules

Code formatting rules are checked by Black. More info can be found at https://black.readthedocs.io/en/stable/.

Docstrings style

We are using Google’s docstring style.

Here is a simple example:

def function_with_pep484_type_annotations(param1: int, param2: str) -> bool:
    """Example function with PEP 484 type annotations.
    
    Args:
        param1: The first parameter.
        param2: The second parameter.
    
    Returns:
        The return value. True for success, False otherwise.
    
    Raises:
        ValueError: If the first parameter does not contain proper model name
    """

For further guidance, see the rest of our codebase, or check sources online; there are many.