> This is a good thing for privacy, because you can run AI locally more efficiently.
Maybe my comment is unwelcome here, but I feel the need to point out that many AI applications are built on abuses of privacy, trained on data whose human creators could never meaningfully consent to its being co-opted into the model.
For some, the best thing for privacy is to resist shiny new AI toys altogether, and to discourage their use outside a very narrow range of public datasets free of licensing claims and abusive data-collection practices.
Here is an example of an AI application that is good or neutral for privacy and other human rights:
https://hrdag.org/tech-notes/large-language-models-IPNO.html
Here is an example that is definitely not:
https://openai.com/chatgpt
Here's a relevant article:
https://www.businessinsider.com/tech-updated-terms-to-use-customer-data-to-train-ai-2023-9
I understand that this is perhaps off-topic. Just food for thought for privacy enthusiasts contemplating their personal use of AI.