V2EX gcod
V2EX member #171585, joined on 2016-05-03 21:59:48 +08:00
Today's activity rank 13561
Per gcod's settings, the topics list is hidden
Deals info, including closed deals, is not hidden
gcod's recent replies
@KaiWuBOSS 4B doesn't work either, as expected...

PS C:\Windows\system32> irm https://raw.githubusercontent.com/val1813/kaiwu/main/install.ps1 | iex
Kaiwu Installer
===============

Detected: windows/amd64
Fetching latest release...
Latest version: v0.1.6
Downloading https://github.com/val1813/kaiwu/releases/download/v0.1.6/kaiwu-windows-amd64.zip...

Kaiwu installed successfully!

Kaiwu v0.1.6

Get started:
kaiwu run Qwen3-30B-A3B

Note: restart your terminal for PATH changes to take effect.
PS C:\Windows\system32> kaiwu run Qwen3-4B

Local LLM Deployer v0.1.6 llama.cpp b8864
by llmbbs.ai, a local-AI tech community

[1/6] Probing hardware...
GPU: NVIDIA GeForce GTX 1660 Ti (SM75, 6144 MB VRAM, 288 GB/s)
RAM: 31 GB DDR4
OS: windows amd64

[2/6] Selecting configuration...
Model: Qwen3-4B (dense, 4B)
Quant: q5-k-m (2.8 GB)
Mode: full_gpu
Accel: Flash Attention

[3/6] Checking files...
Using bundled iso3 binary: llama-server-cuda.exe
Binary: llama-server-cuda.exe [cached]
Model: Qwen3-4B-Q5_K_M.gguf [cached]

[4/6] Preflight check...
llama-server does not support iso3; falling back to q8_0/q4_0
VRAM sufficient

[5/6] Warmup benchmark...
Probe 1: ctx=8K ... OOM
Probe 2: ctx=4K ... OOM
Warmup failed: all ctx probes failed (tried down to 4K)
Using default parameters

[6/6] Starting server...
Waiting for llama-server to be ready (port 11434)...
Insufficient VRAM; retrying with context lowered to 4K...
Waiting for llama-server to be ready (port 11434)...
Error: failed to start llama-server: 2 consecutive start failures; won't run even at the minimum context (4K)

NVIDIA GeForce GTX 1660 Ti: 6144 MB VRAM
Model Qwen3-4B: ~2867 MB
KV cache (4K, q4_0): ~112 MB
Estimated total required: ~4003 MB

Suggestions:
1. Run kaiwu run qwen3-4b --reset to re-probe parameters
2. The model is small but still OOMs; this may be a parameter-configuration issue, please upgrade to the latest version

Usage:
kaiwu run <model> [flags]

Flags:
--bench Run benchmark after starting
--ctx-size int Manually specify context size (0 = auto)
--fast Skip warmup, use cached profile
-h, --help help for run
--llama-server string Use a custom llama-server binary (full path)
--reset Clear the cache and re-run warmup to probe optimal parameters
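As an aside, the reported VRAM numbers are internally consistent if one assumes kaiwu adds a fixed ~1024 MB compute-buffer reserve on top of weights and KV cache; the later Qwen3-1.7B run shows the same 1024 MB gap. A minimal sanity check, with the 1024 MB reserve being my assumption rather than a documented value:

```python
# Sanity check on the OOM report's arithmetic (all figures in MB).
# The gap between (model + KV cache) and the reported total is assumed
# here to be a fixed ~1024 MB compute/scratch-buffer reserve; kaiwu's
# real formula is not documented, so treat this as a guess.
MODEL_QWEN3_4B = 2867   # Q5_K_M weights, as reported
KV_CACHE_4K = 112       # 4K context, q4_0 KV cache, as reported
SCRATCH = 1024          # assumed fixed reserve

print(MODEL_QWEN3_4B + KV_CACHE_4K + SCRATCH)  # 4003, matching the report
```

Either way, ~4003 MB should fit comfortably in 6144 MB of VRAM, which supports the tool's own hint that the OOM is a configuration bug rather than a genuine capacity limit.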

PS C:\Windows\system32> kaiwu run qwen3-4b --reset

Local LLM Deployer v0.1.6 llama.cpp b8864
by llmbbs.ai, a local-AI tech community

[1/6] Probing hardware...
GPU: NVIDIA GeForce GTX 1660 Ti (SM75, 6144 MB VRAM, 288 GB/s)
RAM: 31 GB DDR4
OS: windows amd64

[2/6] Selecting configuration...
Model: Qwen3-4B (dense, 4B)
Quant: q5-k-m (2.8 GB)
Mode: full_gpu
Accel: Flash Attention

[3/6] Checking files...
Using bundled iso3 binary: llama-server-cuda.exe
Binary: llama-server-cuda.exe [cached]
Model: Qwen3-4B-Q5_K_M.gguf [cached]

[4/6] Preflight check...
llama-server does not support iso3; falling back to q8_0/q4_0
VRAM sufficient

[5/6] Warmup benchmark...
Cache cleared; re-probing
Probe 1: ctx=8K ... OOM
Probe 2: ctx=4K ... OOM
Warmup failed: all ctx probes failed (tried down to 4K)
Using default parameters

[6/6] Starting server...
Waiting for llama-server to be ready (port 11434)...
Insufficient VRAM; retrying with context lowered to 4K...
Waiting for llama-server to be ready (port 11434)...
Error: failed to start llama-server: 2 consecutive start failures; won't run even at the minimum context (4K)

NVIDIA GeForce GTX 1660 Ti: 6144 MB VRAM
Model Qwen3-4B: ~2867 MB
KV cache (4K, q4_0): ~112 MB
Estimated total required: ~4003 MB

Suggestions:
1. Run kaiwu run qwen3-4b --reset to re-probe parameters
2. The model is small but still OOMs; this may be a parameter-configuration issue, please upgrade to the latest version

Usage:
kaiwu run <model> [flags]

Flags:
--bench Run benchmark after starting
--ctx-size int Manually specify context size (0 = auto)
--fast Skip warmup, use cached profile
-h, --help help for run
--llama-server string Use a custom llama-server binary (full path)
--reset Clear the cache and re-run warmup to probe optimal parameters
PS C:\Windows\system32> kaiwu run Qwen3-1.7B --ctx-size 2048

Local LLM Deployer v0.1.4 llama.cpp b8864
by llmbbs.ai, a local-AI tech community

[1/6] Probing hardware...
GPU: NVIDIA GeForce GTX 1660 Ti (SM75, 6144 MB VRAM, 0 GB/s)
RAM: 31 GB DDR4
OS: windows amd64

[2/6] Selecting configuration...
Model: Qwen3-1.7B (dense, 2B)
Quant: q5-k-m (1.2 GB)
Mode: full_gpu
Accel: Flash Attention

[3/6] Checking files...
Using bundled iso3 binary: llama-server-cuda.exe
Binary: llama-server-cuda.exe [cached]
Model: Qwen3-1.7B-Q5_K_M.gguf [cached]

[4/6] Preflight check...
llama-server does not support iso3 (or the first JIT compilation timed out); falling back to q8_0/q4_0
VRAM sufficient

[5/6] Warmup benchmark...
User specified ctx=2048; skipping cache
User override: ctx=2K ... Warmup failed: user-specified ctx=2K failed to start (OOM?)
Using default parameters

[6/6] Starting server...
Waiting for llama-server to be ready (port 11434)...
Insufficient VRAM; retrying with context lowered to 4K...
Waiting for llama-server to be ready (port 11434)...
Error: failed to start llama-server: 2 consecutive start failures; won't run even at the minimum context (4K)

NVIDIA GeForce GTX 1660 Ti: 6144 MB VRAM
Model Qwen3-1.7B: ~1228 MB
KV cache (4K, q4_0): ~112 MB
Estimated total required: ~2364 MB

Suggestions:
1. Choose a smaller quantization (Q2_K)
2. Choose a smaller model
3. Use a MoE-offload model (experts in CPU RAM)
Usage:
kaiwu run <model> [flags]

Flags:
--bench Run benchmark after starting
--ctx-size int Manually specify context size (0 = auto)
--fast Skip warmup, use cached profile
-h, --help help for run
--llama-server string Use a custom llama-server binary (full path)
--reset Clear the cache and re-run warmup to probe optimal parameters



It's a 7-year-old machine by now, a 1660 Ti
The approach essentially skips the DDNS-hostname back-to-origin lookup and updates the CDN's origin IP directly, cutting resolution latency and removing a point of failure, while cleverly leveraging ESA's free IPv4/IPv6 dual-stack support. Packaging DDNS, dynamic IPv6 monitoring, and CDN configuration automation into the D-NET tool really does simplify operations a lot; very practical and creative
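The update loop described there could look roughly like this; `detect` and `push` are hypothetical stand-ins for whatever public-IP probe and CDN set-origin API call D-NET/ESA actually use:

```python
# Rough sketch of the idea above: skip the DDNS hostname entirely and
# push the current origin IP straight into the CDN configuration.
# `detect` and `push` are hypothetical stand-ins for a public-IP probe
# (e.g. an IPv6 "what's my IP" lookup) and the CDN's set-origin API.

def sync_origin(cached_ip: str, detect, push) -> str:
    """Update the CDN origin only when the detected IP actually changed."""
    ip = detect()
    if ip != cached_ip:
        push(ip)  # one API call replaces the whole DDNS resolve hop
        return ip
    return cached_ip
```

Run on a timer, this removes the DDNS record as a point of failure: the CDN edge always holds a literal origin IP instead of resolving a hostname on each back-to-origin request.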
The absolute prerequisite is that you know what you're doing, rather than blindly acting on the AI's answers; sometimes AI hallucinations are fatal.
Telecom is a bit cheaper; renting gigabit runs about 360 RMB a year~
Whereabouts in Henan?
Just trying my luck; it's the taking part that counts 0.0
Do you have many images? If it's a small personal-blog-type site you can use my image host; it's been running modestly for 6-7 years.
Of course, if you have lots of images and heavy traffic, forget it; better to self-host than freeload off someone else.
Nov 1, 2024
Replied to a topic by jqknono: A certain vendor's "clever" trick to prevent DNS hijacking
If it doesn't get the resolution record it will try to re-fetch; just return it a fake address.