IMHO, it's just another cloud LLM inference service that promises no logging. It doesn't keep your conversations confidential from the service provider itself. That stands in contrast to Proton's other services, where most of your data is encrypted client-side with your own keys, so Proton has no access to it.
Since it only runs models in the 32B-parameter class, you can run the same class of models locally on a modern PC with enough RAM (a 4-bit quantized 32B model takes roughly 20 GB), which gives you total control over your data. Unless it starts offering larger models, I see little value in it.
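For anyone curious what "run it locally" looks like in practice, here's a minimal sketch using llama-cpp-python; the model file name and path are placeholders, and you'd download a quantized GGUF (e.g. a Qwen2.5-32B-Instruct quant) yourself:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path: point this at whatever ~32B GGUF quant you downloaded
# (a Q4_K_M file needs roughly 20 GB of RAM).
llm = Llama(
    model_path="./models/qwen2.5-32b-instruct-q4_k_m.gguf",
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload layers to GPU if available; set 0 for CPU only
)

# Chat-style completion; the prompt never leaves your machine.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Why does client-side encryption matter?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Same idea works with Ollama or plain llama.cpp if you'd rather not touch Python.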