I have mixed feelings about this.
On the one hand, if one accepts Proton's premise that "Whether we like it or not, AI is here to stay […]", I would much prefer that future AI models preserved people's privacy in the way that Proton claims that their AI does.
On the other hand…
"Artificial intelligence has the power to tackle humanity’s challenges, big and small, from scheduling meetings to modeling molecules."
One promise I do see in AI generally is making hugely positive impacts on people's health and wellbeing, and I'd like to support that. But that seems far in the future, and I'm not clear on how Proton's AI contributes to it. Baby steps?
I don't view scheduling meetings as a human challenge at all, small or big, but apparently Proton does.
"Whether you’re summarizing sensitive legal documents, asking private health questions, or rewriting personal emails, Lumo is there to help. It gives you an edge when you need it, without demanding personal data in return."
This statement by Proton concerns me. While LLMs can be great at rewriting text, it's clear to me that AI models generally cannot yet be trusted to output facts. I've seen so many examples of nonsense generated even by the supposedly most powerful models. It surprises me that people I talk to use LLMs for precisely this purpose, such as asking them what schizophrenia is, or whether psychosis is caused by elevated dopamine levels. I can feel the temptation, because finding factual summaries from reliable websites through a search engine is harder these days, and people naturally want facts summarized in the structured, easy-to-understand overviews that these models generate.
I asked Lumo "Is psychosis caused by elevated dopamine levels?" with the web search option selected. It didn't provide any sources, so I couldn't scrutinize the output without doing a manual web search. I was somewhat pleased that it ended the conversation by suggesting I consult a medical professional. But when I asked the same question again, it didn't give that advice.
I personally wouldn't encourage people to trust an LLM's output on "private health questions".
Proton appears to be joining the league of companies that advertise their LLMs' fact-generating abilities – companies that are filling the web I once loved with text "written by no one to communicate nothing". In this case, Proton doesn't even bother to add a disclaimer that its AI will occasionally generate false information.
Overall, I feel tempted to use the service, because it's probably better at giving me ideas for new texts than any local LLM I can run on my devices. Then again, the power my devices would consume is likely far less than that of whatever GPU farm Proton is running. And contributing to the popularity of energy-hungry machines is not something I feel particularly motivated to do. As indicated, I find the marketing behind Lumo problematic. I'll likely not be using it.