Dear Qubers,

I would like to enquire if anyone has had any success running Ollama
with GPU passthrough?

I am using an Arch Linux template that works well for dedicated video
out; I can play media and games (with some stutter), which is already a
great convenience on Qubes.

However, the main reason I built a passthrough setup was for Ollama /
llama.cpp.

I've tried toggling PCI strict reset, disabling dynamic memory
balancing, and both the Dasharo and standard BIOS. Everything works
fine if I switch from Qubes OS to plain Arch.
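
For reference, the dom0 side of my setup looks roughly like this (the
VM name and PCI address here are placeholders, not my exact values):

[user@dom0 ~]$ qvm-pci attach --persistent -o no-strict-reset=true \
                 arch-gpu dom0:03_00.0
[user@dom0 ~]$ qvm-prefs arch-gpu maxmem 0

The second command sets maxmem to 0 so the qube is left out of dynamic
memory balancing.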

I'm using ollama-rocm.

I used the GPU passthrough guide at:
https://forum.qubes-os.org/t/create-a-gaming-hvm/19000/162

The GPU is detected, but model loading never progresses beyond:
llm_load_tensors:        CPU buffer size =    35.44 MiB
It just hangs there forever.

Full dump below.

If anyone has got any further or has any thoughts please let me know!
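
If a more verbose log would help, I can also re-run with Ollama's debug
output enabled (the OLLAMA_DEBUG variable that shows up in the env map
in the dump below):

[user@archlinux ~]$ OLLAMA_DEBUG=1 ollama serve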



---------




[user@archlinux ~]$ ollama serve
2024 routes.go:1011: INFO server config env="map[OLLAMA_DEBUG:false 
OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 
OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 
OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/home/user/.ollama/models 
OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 
OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* 
https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* 
https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* 
https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: 
OLLAMA_TMPDIR:]"
time=2024 level=INFO source=images.go:725 msg="total blobs: 10"
time=2024 level=INFO source=images.go:732 msg="total unused blobs removed: 0"
time=2024 level=INFO source=routes.go:1057 msg="Listening on 127.0.0.1:11434 
(version 0.1.44)"
time=2024 level=INFO source=payload.go:30 msg="extracting embedded files" 
dir=/tmp/ollama478581086/runners
time=2024 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu rocm]"
time=2024 level=WARN source=amd_linux.go:48 msg="ollama recommends running the 
https://www.amd.com/en/support/linux-drivers"; error="amdgpu version file 
missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such 
file or directory"
time=2024 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 
gpu_type=gfx1030
time=2024 level=INFO source=types.go:71 msg="inference compute" id=0 
library=rocm compute=gfx1030 driver=0.0 name=1002:73bf total="16.0 GiB" 
available="16.0 GiB"
[GIN] 2024 | 200 |     1.05166ms |       127.0.0.1 | HEAD     "/"
[GIN] 2024 | 200 |    5.980785ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024 | 200 |      1.4154ms |       127.0.0.1 | POST     "/api/show"
time=2024 level=WARN source=amd_linux.go:48 msg="ollama recommends running the 
https://www.amd.com/en/support/linux-drivers"; error="amdgpu version file 
missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such 
file or directory"
time=2024 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 
gpu_type=gfx1030
time=2024 level=INFO source=memory.go:133 msg="offload to gpu" 
layers.requested=-1 layers.real=25 memory.available="16.0 GiB" 
memory.required.full="1.6 GiB" memory.required.partial="1.6 GiB" 
memory.required.kv="384.0 MiB" memory.weights.total="703.4 MiB" 
memory.weights.repeating="651.8 MiB" memory.weights.nonrepeating="51.7 MiB" 
memory.graph.full="84.0 MiB" memory.graph.partial="122.7 MiB"
time=2024 level=INFO source=memory.go:133 msg="offload to gpu" 
layers.requested=-1 layers.real=25 memory.available="16.0 GiB" 
memory.required.full="1.6 GiB" memory.required.partial="1.6 GiB" 
memory.required.kv="384.0 MiB" memory.weights.total="703.4 MiB" 
memory.weights.repeating="651.8 MiB" memory.weights.nonrepeating="51.7 MiB" 
memory.graph.full="84.0 MiB" memory.graph.partial="122.7 MiB"
time=2024 level=INFO source=server.go:341 msg="starting llama server" 
cmd="/tmp/ollama478581086/runners/rocm/ollama_llama_server --model 
/home/user/.ollama/models/blobs/sha256- --ctx-size 2048 --batch-size 512 
--embedding --log-disable --n-gpu-layers 25 --parallel 1 --port 46093"
time=2024 level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024 level=INFO source=server.go:529 msg="waiting for llama runner to 
start responding"
time=2024 level=INFO source=server.go:567 msg="waiting for server to become 
available" status="llm server error"
INFO [main] build info | build=3051 commit="5921b8f08" tid="125143178271808" 
timestamp=1719024677
INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | 
AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | 
AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | 
FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | 
MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="125143178271808" timestamp=1719024677 
total_threads=8
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" 
port="46093" tid="125143178271808" timestamp=1719024677
llama_model_loader: loaded meta data with 26 key-value pairs and 219 tensors 
from /home/user/.ollama/models/blobs/sha256- (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not 
apply in this output.
llama_model_loader: - kv   0:                       general.architecture str    
          = llama
llama_model_loader: - kv   1:                               general.name str    
          = deepseek-ai
llama_model_loader: - kv   2:                       llama.context_length u32    
          = 16384
llama_model_loader: - kv   3:                     llama.embedding_length u32    
          = 2048
llama_model_loader: - kv   4:                          llama.block_count u32    
          = 24
llama_model_loader: - kv   5:                  llama.feed_forward_length u32    
          = 5504
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32    
          = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32    
          = 16
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32    
          = 16
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32    
          = 0.000001
llama_model_loader: - kv  10:                       llama.rope.freq_base f32    
          = 100000.000000
llama_model_loader: - kv  11:                    llama.rope.scaling.type str    
          = linear
llama_model_loader: - kv  12:                  llama.rope.scaling.factor f32    
          = 4.000000
llama_model_loader: - kv  13:                          general.file_type u32    
          = 2
llama_model_loader: - kv  14:                       tokenizer.ggml.model str    
          = gpt2
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens 
arr[str,32256]   = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores 
arr[f32,32256]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type 
arr[i32,32256]   = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges 
arr[str,31757]   = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32    
          = 32013
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32    
          = 32021
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32    
          = 32014
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool   
          = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool   
          = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str    
          = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32    
          = 2
llama_model_loader: - type  f32:   49 tensors
llama_model_loader: - type q4_0:  169 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.3583 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 32256
llm_load_print_meta: n_merges         = 31757
llm_load_print_meta: n_ctx_train      = 16384
llm_load_print_meta: n_embd           = 2048
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 16
llm_load_print_meta: n_layer          = 24
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 2048
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 5504
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 100000.0
llm_load_print_meta: freq_scale_train = 0.25
llm_load_print_meta: n_yarn_orig_ctx  = 16384
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 1.35 B
llm_load_print_meta: model size       = 738.88 MiB (4.60 BPW) 
llm_load_print_meta: general.name     = deepseek-ai
llm_load_print_meta: BOS token        = 32013 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 32021 '<|EOT|>'
llm_load_print_meta: PAD token        = 32014 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token         = 126 'Ä'
time=2024 level=INFO source=server.go:567 msg="waiting for server to become 
available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon RX 6900 XT, compute capability 10.3, VMM: no
llm_load_tensors: ggml ctx size =    0.22 MiB
llm_load_tensors: offloading 24 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 25/25 layers to GPU
llm_load_tensors:      ROCm0 buffer size =   703.44 MiB
llm_load_tensors:        CPU buffer size =    35.44 MiB
