
Saw a project on GitHub: a 3090 running 27B at 129 tps, peaking at 207 tps

stefwoo · 16h 30m ago · 1458 views

https://github.com/Luce-Org/lucebox-hub

DFlash DDTree: Qwen3.5 & Qwen3.6 27B GGUF on RTX 3090

First GGUF port of DFlash speculative decoding. Qwen3.5-27B on a single RTX 3090, Q4_K_M target + BF16 draft, DDTree budget=22.

- Up to 207 tok/s in the demo (207.6 tok/s DFlash vs 38.0 tok/s AR, 5.46×)
- 129.5 tok/s mean on the HumanEval 10-prompt bench
- 3.43× faster than autoregressive (+15% over chain speculative decoding; a sketch of the chain variant follows this list)
- 2.8× faster than SGLang AWQ on the same hardware
- Up to 256K context in 24 GB via TurboQuant TQ3_0 KV cache (128K Q4_0 bench: 134.78 tok/s at ctx=131072)
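To make the "+15% over chain speculative decoding" baseline concrete, here is a minimal plain-Python sketch of the chain variant with a greedy accept rule. `target.forward` and `draft.greedy_next` are hypothetical stand-in APIs (the real dflash runtime is C++/CUDA), so treat this as the idea, not the project's code:

```python
import numpy as np

def speculative_decode(target, draft, prompt_ids, k=8, max_new=256):
    """Chain speculative decoding, greedy accept rule (concept sketch).
    Hypothetical APIs: target.forward(seq) returns one logits row per
    position, where row j predicts token j+1; draft.greedy_next(seq)
    returns a single token id."""
    seq = list(prompt_ids)
    produced = 0
    while produced < max_new:
        # 1. Cheap draft pass: propose k tokens autoregressively.
        proposal = []
        for _ in range(k):
            proposal.append(draft.greedy_next(seq + proposal))
        # 2. One target forward over prompt + proposal verifies all
        #    k positions at once -- this is where the speedup comes from.
        logits = target.forward(seq + proposal)
        # 3. Accept the longest prefix the target agrees with; on the
        #    first mismatch take the target's own token instead, so
        #    every iteration commits at least one verified token.
        base = len(seq)
        for i, tok in enumerate(proposal):
            choice = int(np.argmax(logits[base + i - 1]))
            seq.append(choice)
            produced += 1
            if choice != tok:
                break
        else:
            seq.append(int(np.argmax(logits[-1])))  # all accepted: bonus token
            produced += 1
    return seq
```

The draft runs k cheap forward passes and the target runs one big one; the realized speedup depends entirely on how often the target accepts the draft's tokens.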

PFlash speculative prefill on RTX 3090

In-process speculative prefill, C++/CUDA only. A drafter (Qwen3-0.6B BF16) loaded directly into the dflash daemon scores per-token importance over a long prompt; the heavy target (Qwen3.6-27B Q4_K_M) only prefills the spans that matter. Both models share the same ggml allocator on a single RTX 3090. No Python, no Triton, no PyTorch at runtime: just the dflash binary and four custom CUDA kernels (mean_K → score → select → sparse_fwd), plus BSA (mit-han-lab/Block-Sparse-Attention, FA-2 derived, sm_80+) for the long-context drafter forward.
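One plausible reading of the mean_K → score → select → sparse_fwd pipeline, sketched in NumPy. The block size, the scoring rule, and the function name here are assumptions for illustration, not the project's actual kernel math:

```python
import numpy as np

def select_prefill_spans(drafter_K, tail_Q, n_tokens,
                         block=128, keep_ratio=0.05):
    """Pick which prompt spans the heavy target should prefill.
    ASSUMED shapes and scoring rule, for illustration only:
      drafter_K: (n_tokens, d) keys from the cheap drafter pass
      tail_Q:    (w, d)        drafter queries for the last w tokens"""
    n_blocks = (n_tokens + block - 1) // block
    # mean_K: pool the drafter's keys into one vector per block.
    pooled = np.stack([drafter_K[i * block:(i + 1) * block].mean(axis=0)
                       for i in range(n_blocks)])           # (n_blocks, d)
    # score: how strongly the end of the prompt attends to each block.
    scores = (tail_Q @ pooled.T).max(axis=0)                 # (n_blocks,)
    # select: keep the top keep_ratio of blocks, in document order.
    n_keep = max(1, int(np.ceil(keep_ratio * n_blocks)))
    keep = np.sort(np.argsort(scores)[-n_keep:])
    # sparse_fwd (not shown): the target prefills only these spans.
    return [(int(b) * block, min((int(b) + 1) * block, n_tokens))
            for b in keep]
```

With keep_ratio=0.05 (the NIAH setting below), the heavy target would prefill only about 5% of a 131K-token prompt, which is at least consistent with the ~10× TTFT numbers that follow.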

- ~10.4× TTFT at 128K context: 24.8 s dflash daemon vs ~257 s llama.cpp (FA on, Q4_0 KV)
- 10.0× TTFT at 64K context: 13.5 s dflash vs 134.95 s llama.cpp
- NIAH single-needle retrieved at every measured context (32K → 128K), keep_ratio=0.05, DFLASH_FP_ALPHA=0.85

5 replies · 2026-05-03 03:06:17 +08:00
1 · stefwoo (OP) · 15h 56m ago
To fit the big model and the small draft model into 24 GB of VRAM together, the target is 4-bit quantized (~16 GB), the draft stays in BF16 (~1.2 GB), and the KV cache is quantized as well.
During prefill, the small draft model races through the long prompt and picks out only the most important 5% of spans; the big model then does a sparse prefill over just that 5%, skipping the other 95% of irrelevant content.
After that comes generation: the draft model dreams up several candidate tokens at a time, and the big model verifies the whole tree in one pass with tree attention, giving fast token-by-token decoding (see the sketch below).
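A concept-level Python sketch of that last step. Only the overall shape (draft proposes a token tree, target verifies the whole tree at once) comes from the thread; the fixed depth/width tree and the greedy accept rule are assumptions, and the real implementation is C++/CUDA:

```python
def tree_decode_step(target, draft, seq, depth=3, width=2):
    """One DDTree-style draft-and-verify step (concept sketch only).
    Real DFlash builds a budgeted tree (budget=22 above) and the target
    scores the whole flattened tree in ONE forward pass using a tree
    attention mask; the verify loop below is the slow equivalent.
    greedy_next()/topk_next() are hypothetical single-token APIs."""
    # 1. The draft model grows a small token tree: every node branches
    #    into its top-`width` continuations, `depth` levels deep.
    paths = [[]]
    for _ in range(depth):
        paths = [p + [t] for p in paths
                 for t in draft.topk_next(seq + p, k=width)]
    # 2. Verify: keep the longest root-to-leaf prefix on which the
    #    target's greedy choice matches the draft at every step.
    best = []
    for path in paths:
        accepted = []
        for tok in path:
            if target.greedy_next(seq + accepted) != tok:
                break
            accepted.append(tok)
        if len(accepted) > len(best):
            best = accepted
    # 3. Commit the accepted prefix plus one token from the target
    #    itself, so each step always makes progress.
    seq = seq + best
    seq.append(target.greedy_next(seq))
    return seq
```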
2 · strobber16 · 15h 21m ago
This sent me back to rewatch bycloud's video on speculative decoding. And I still couldn't follow it.
3 · Hermitist · 12h 48m ago
It's all CUDA; too bad my MacBook Air can't test any of it.
4 · beyondstars · 11h 1m ago
With quantization this aggressive, are you sure the model's real-world quality doesn't suffer?
5 · sddyzm (PRO) · 10h 29m ago via iPhone · ❤️ 2
27B models are just dumb.