mirror of https://github.com/ggml-org/llama.cpp.git
synced 2025-06-30 12:55:17 +00:00
docker : drop to CUDA 12.4 (#11869)

* docker : drop to CUDA 12.4
* docker : update readme [no ci]
@@ -1,6 +1,6 @@
 ARG UBUNTU_VERSION=22.04
 # This needs to generally match the container host's environment.
-ARG CUDA_VERSION=12.6.0
+ARG CUDA_VERSION=12.4.0
 # Target the CUDA build image
 ARG BASE_CUDA_DEV_CONTAINER=nvidia/cuda:${CUDA_VERSION}-devel-ubuntu${UBUNTU_VERSION}
@@ -69,7 +69,7 @@ You may want to pass in some different `ARGS`, depending on the CUDA environment
 
 The defaults are:
 
-- `CUDA_VERSION` set to `12.6.0`
+- `CUDA_VERSION` set to `12.4.0`
 - `CUDA_DOCKER_ARCH` set to the cmake build default, which includes all the supported architectures
 
 The resulting images, are essentially the same as the non-CUDA images:
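The README hunk above notes that different `ARGS` can be passed in depending on the CUDA environment. A minimal sketch of overriding the new default at build time (the image tag and Dockerfile path here are assumptions; check the repository's `.devops/` directory for the actual file name):

```shell
# Hypothetical example: build the CUDA image, overriding the
# CUDA_VERSION build argument (default is now 12.4.0 per this commit).
# The -f path and tag are assumptions, not taken from this diff.
docker build -t local/llama.cpp:cuda \
  --build-arg UBUNTU_VERSION=22.04 \
  --build-arg CUDA_VERSION=12.4.0 \
  -f .devops/cuda.Dockerfile .
```

Leaving `CUDA_DOCKER_ARCH` unset keeps the cmake build default, which targets all supported architectures.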