Using Noir with vLLM
Use vLLM, a high-throughput inference engine for LLMs, to run fast local code analysis with Noir.
Setup
- Install vLLM: Follow the official installation guide.
- Serve a model:

  ```shell
  vllm serve microsoft/phi-3
  ```

  This starts a local server with an OpenAI-compatible API endpoint.
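Once the server is running, you can confirm the endpoint responds before pointing Noir at it. A minimal check, assuming vLLM's default bind address of `localhost:8000` (adjust if you passed `--host`/`--port` to `vllm serve`):

```shell
# List the models served by the local vLLM instance.
# Assumes the default address localhost:8000.
curl -s http://localhost:8000/v1/models
```

If the server is up, the JSON response lists the served model ID (here `microsoft/phi-3`).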
Usage
```shell
noir -b ./spec/functional_test/fixtures/hahwul \
  --ai-provider=vllm \
  --ai-model=microsoft/phi-3
```
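Because vLLM exposes the standard OpenAI chat completions API, you can also query the endpoint directly when debugging connection or model-name issues, again assuming the default `localhost:8000` address:

```shell
# Send a minimal chat completion request to the local vLLM server.
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "microsoft/phi-3",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```

A successful response confirms the model name and endpoint that Noir will use.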