
    The distributed GPU cluster for LLM inference on Solana

    Get paid to contribute your idle compute power to the Kuzco network. Use the network to run inference on popular models like Llama 3.1 and others through an OpenAI-compatible API.
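
    Because the API is OpenAI-compatible, existing OpenAI client libraries can talk to the network simply by swapping in a different base URL. Below is a minimal sketch using the Python openai SDK; the endpoint URL, API key, and model identifier are placeholders, not confirmed Kuzco values, so substitute the ones provided by your dashboard.

        # Minimal sketch: point the standard OpenAI client at an
        # OpenAI-compatible endpoint. Base URL, key, and model name
        # below are hypothetical placeholders.
        from openai import OpenAI

        client = OpenAI(
            base_url="https://inference.example.com/v1",  # placeholder endpoint
            api_key="YOUR_API_KEY",                       # placeholder key
        )

        response = client.chat.completions.create(
            model="llama-3.1-8b-instruct",  # example model id; check the network's model list
            messages=[{"role": "user", "content": "Hello from the network!"}],
        )

        print(response.choices[0].message.content)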

    Available on macOS, Windows, and Linux.