Using Noir with vLLM

vLLM is a high-throughput inference engine for large language models. Pairing it with Noir lets you run fast, fully local code analysis without sending source code to an external API.

Setup

  1. Install vLLM: Follow the official installation guide.

  2. Serve a Model:

    vllm serve microsoft/phi-3
    

    This starts a local server with an OpenAI-compatible API endpoint.
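Before pointing Noir at the server, you can confirm it is up. A minimal check, assuming the vLLM defaults (the OpenAI-compatible server listens on http://localhost:8000 and exposes a /v1/models endpoint):

```shell
# Query the OpenAI-compatible model list; falls back to a message if the
# server is not running yet. Assumes vLLM's default host/port (localhost:8000).
curl -s http://localhost:8000/v1/models || echo "vLLM server is not reachable"
```

If the server is running, the response lists the served model ID, which should match the value you pass to Noir's --ai-model flag.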

Usage

noir -b ./spec/functional_test/fixtures/hahwul \
     --ai-provider=vllm \
     --ai-model=microsoft/phi-3