22 changes: 22 additions & 0 deletions docs/source/advanced_topics.rst
@@ -0,0 +1,22 @@
Advanced Topics
===============

- `Just-in-Time Compilation`_

Just-in-Time Compilation
------------------------
cuVS uses Just-in-Time (JIT) `Link-Time Optimization (LTO) <https://developer.nvidia.com/blog/cuda-12-0-compiler-support-for-runtime-lto-using-nvjitlink-library/>`_ compilation to build certain kernels. When JIT compilation is triggered, cuVS compiles the kernel for your GPU architecture and automatically caches it both in memory and on disk. The cache validity rules are as follows:

1. The in-memory cache is valid for the lifetime of the process.
2. The on-disk cache is valid until a CUDA driver upgrade is performed. The cache can be portably shared between machines via network or cloud storage, and we strongly recommend storing it in a persistent location. For details on configuring the on-disk cache, see the CUDA documentation on `JIT Compilation <https://docs.nvidia.com/cuda/cuda-programming-guide/05-appendices/environment-variables.html#jit-compilation>`_. The environment variables of interest are ``CUDA_CACHE_PATH`` and ``CUDA_CACHE_MAXSIZE``.
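A minimal sketch of configuring these variables from Python. The path and size below are illustrative assumptions, not cuVS defaults; the variables must be set before the CUDA driver is initialized, i.e. before importing any CUDA-using library such as cuvs:

```python
import os

# Point the CUDA JIT cache at a persistent location (in a container,
# this should be a mounted volume so the cache survives restarts).
# Example path only; choose one appropriate for your deployment.
os.environ["CUDA_CACHE_PATH"] = "/mnt/persistent/cuda-cache"

# Raise the on-disk cache size limit to 4 GiB (value is in bytes).
os.environ["CUDA_CACHE_MAXSIZE"] = str(4 * 1024**3)

# Only now import CUDA-using libraries, so the driver picks up the
# settings above, e.g.: from cuvs.neighbors import ivf_flat
```

The same variables can equally be set in a shell profile, a Dockerfile ``ENV`` line, or an orchestrator's environment configuration.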


Thus, JIT compilation is a one-time cost, and you should see no performance loss after the first compilation. We recommend running a "warmup" that triggers JIT compilation before actual usage.
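One way to structure such a warmup, sketched generically. ``warm_up`` and ``fake_search`` are hypothetical names introduced here for illustration; in practice the callable would be a real cuVS search invocation (with a small representative query batch) rather than the stand-in below:

```python
import time

def warm_up(search_fn, *args, **kwargs):
    """Run one throwaway call so JIT compilation (and cache population)
    happens before real traffic arrives. Returns the elapsed time, which
    on the very first run includes the JIT compilation cost."""
    start = time.perf_counter()
    search_fn(*args, **kwargs)
    return time.perf_counter() - start

# Stand-in for an actual cuVS search call, used only so this sketch runs
# without a GPU; substitute your real search call here.
def fake_search(queries):
    return [q * 2 for q in queries]

elapsed = warm_up(fake_search, [1, 2, 3])
print(f"warmup finished in {elapsed:.6f}s; later calls reuse the cached kernel")
```

Running such a script once at service startup (or as a build step, with the cache directory persisted) keeps the compilation cost out of the request path.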
Member:

Do you want to make it super mega obvious to people who deploy services based on cuvs in containers that "We really really strongly recommend you make sure the cache is stored in a persistent location so that containers don't have to warm up the cache after each restart"

Is it possible to include something that warms up the cache in my Dockerfile? So that the cache is built into the image?

I am not sure if I'd make the connection from reading the current docs, hence wondering if a really explicit "hit people over the head with it" call out would be useful.

Member Author:

I think that's a great idea. Let me add some phrasing to convey that very clearly.

> Is it possible to include something that warms up the cache in my Dockerfile? So that the cache is built into the image?

You mean automatically?

Member Author:

How does it read now?

Member:

Sounds good now. Let's see if people get it, if not can always tune this later.

Wasn't thinking of something automatic, more a command I can include in my Dockerfile as a RUN command

Member Author:

If you see the link that I added now, you can control where the cache is written with an environment variable. I'm hoping docker savvy users can now figure out the volume mount and environment variable connection.

Member:

Works for me


Currently, the following capabilities will trigger a JIT compilation:
- IVF Flat search APIs: :doc:`cuvs::neighbors::ivf_flat::search() <cpp_api/neighbors_ivf_flat>`

.. toctree::
:maxdepth: 2

jit_lto_guide
10 changes: 10 additions & 0 deletions docs/source/developer_guide.md
@@ -406,3 +406,13 @@ void foo(const raft::resources& res, ...)
...
}
```

## Using Just-in-Time Link-Time Optimization

cuVS is moving to just-in-time [link-time optimization (LTO)](https://developer.nvidia.com/blog/cuda-12-0-compiler-support-for-runtime-lto-using-nvjitlink-library/) for new kernels, and this requires some changes to the way kernels are written. Instead of compiling all kernel variants at build time (which leads to binary-size explosion), JIT LTO compiles kernel fragments separately and links them together at runtime based on the specific configuration needed.
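A back-of-the-envelope sketch of why this matters for binary size. The parameter counts below are invented for illustration and are not cuVS's actual template space; the point is that ahead-of-time builds pay for the *product* of the parameter axes, while fragments pay roughly for their *sum*:

```python
# Illustrative axis sizes only (not taken from cuVS):
dtypes = 4            # e.g. float32, float16, int8, uint8
index_types = 2       # e.g. 32-bit and 64-bit index types
distance_metrics = 5
tile_shapes = 6

# Ahead-of-time compilation must build every combination into the binary:
aot_variants = dtypes * index_types * distance_metrics * tile_shapes
print(aot_variants)   # 240 full kernels compiled at build time

# JIT LTO instead ships one fragment per choice on each axis and links
# the needed combination at runtime:
lto_fragments = dtypes + index_types + distance_metrics + tile_shapes
print(lto_fragments)  # 17 fragments shipped in the binary
```

Each additional template axis multiplies the ahead-of-time count but only adds to the fragment count, which is why the gap widens as kernels gain configuration options.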
Member:

Can we link somewhere in the cuda docs in this paragraph? Maybe for "link time optimation"?

Member:

Also, can you provide an ever so brief summary of the perf implications? Maybe link to the cuda docs where appropriate for expectations?

Member Author:

First-run performance implications are very kernel- and hardware-dependent; the CUDA docs make no guarantees about that.


This approach ultimately enables:
- **Reduced binary size**: compile fragments once, combine them many ways
- **User-defined functions**: link UDFs into cuVS CUDA kernels

For more information on JIT LTO, see [Advanced Topics](advanced_topics). For a complete guide on implementing JIT LTO kernels, including step-by-step examples, see the [JIT LTO Guide](jit_lto_guide.md).
1 change: 1 addition & 0 deletions docs/source/index.rst
@@ -87,5 +87,6 @@ Contents
integrations.rst
cuvs_bench/index.rst
api_docs.rst
advanced_topics.rst
contributing.md
developer_guide.md