question: Running on Mac Mini 24GB #112

@ilker-aktuna

Description

Topic

Other

Add-on version

No response

Access mode (if relevant)

None

Your question

This is actually a fairly generic question, so I first searched to see whether someone had already asked it.
It might be better to ask this on Discord, but Discord is blocked where I live.

So, please bear with me on this question:

I have a Mac Mini with 24 GB RAM, and I currently run Qwen3.5:9b on it with Ollama.
It works fine for chat.
But I want to control Home Assistant with it. I provide an access token and URL, and it basically works, but it makes too many curl requests, which inflates the used context and makes processing take too long.
I tried other models (mostly smaller Qwen variants) and llama.cpp instead of Ollama, but it did not get better.
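To illustrate the context-growth problem described above, here is a minimal sketch of how each tool call maps to a separate Home Assistant REST API request (the `/api/states` endpoint and Bearer-token auth are from Home Assistant's documented REST API; the URL and token values are placeholders):

```python
# Sketch: every per-entity REST call adds another response to the LLM
# context, while a single /api/states call returns all entities at once.
HA_URL = "http://homeassistant.local:8123"  # placeholder address
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # placeholder token

def build_request(path: str) -> dict:
    """Build the URL and headers for one Home Assistant REST API call."""
    return {
        "url": f"{HA_URL}/api/{path}",
        "headers": {
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    }

# One request per entity: each response lands in the context separately.
per_entity = [build_request(f"states/light.{n}") for n in ("kitchen", "hall")]

# One batched request: a single response covering every entity.
single_batch = build_request("states")
```

Fewer, larger requests generally mean fewer tool-call round trips for the model to reason over, which is why the per-entity pattern above hurts on a small local model.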

So I am looking for a better, more effective way.
Is this add-on faster than running OpenClaw separately? Is it a better-trained or more optimized approach?
If not, what advantage does it have over a standalone OpenClaw installation?

I am running HA in Docker, so I cannot try this out of the box. Any feedback on running it on a Mac Mini would be appreciated.

Also, I'd appreciate any feedback on which LLM model to use with OpenClaw on a Mac Mini to control HA.

What have you already tried?

No response

Relevant logs / errors (optional)


Extra context (optional)

No response
