Granola says the product is private by default. Its actual behavior, according to reporting from The Verge, tells a more complicated story: notes are viewable by anyone with a link by default, and the app uses user notes for internal AI training unless users opt out.

That is not just a matter of fine print. It is a product-default problem. In software that handles meeting transcripts, personal context, and sensitive business conversations, the default permission model is the practical policy most users experience. If a note app quietly starts users out with link-sharing and trainable data, its operational posture is broader than its privacy-forward label suggests.

The link-sharing setting matters because it changes the exposure model before a user has made an affirmative choice. "Anyone with a link" is not the same as fully public, but it is also not restricted access. It creates a shareable object that can move beyond the original audience through forwarding, chat, email, or accidental paste into the wrong place. For technical teams, that raises the chance of unintended disclosure even when the app is being used for routine internal notes.
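To make the exposure difference concrete, here is a minimal sketch (hypothetical names, not Granola's implementation) contrasting a link-based check with a restricted-access check. With "anyone with a link," possession of the URL is the credential, so forwarding the link forwards the access; no identity check stops the note from moving beyond its original audience.

```python
import secrets

# In-memory registry mapping unguessable link tokens to note IDs
# (purely illustrative; a real service would persist this).
LINK_TOKENS: dict[str, str] = {}

def create_share_link(note_id: str) -> str:
    """Mint a hard-to-guess share URL for a note."""
    token = secrets.token_urlsafe(16)
    LINK_TOKENS[token] = note_id
    return f"https://notes.example.com/s/{token}"

def can_read_via_link(token: str) -> bool:
    # Capability-style check: no identity involved, so anyone who
    # receives the URL (forwarded, pasted, emailed) can read the note.
    return token in LINK_TOKENS

def can_read_via_acl(user_id: str, allowed_readers: set[str]) -> bool:
    # Restricted access: the reader must appear on an explicit list,
    # so forwarding the link alone grants nothing.
    return user_id in allowed_readers
```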

The training default matters just as much. According to the reporting, Granola includes user notes in internal AI training unless users explicitly opt out. That turns content entered for one purpose—capturing and summarizing meetings—into input for a separate product-improvement pipeline. From a governance standpoint, that is a material distinction. It affects data-use expectations, retention analysis, and whether a team sees the system as a closed workspace or as a source of model-training material.
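As a sketch of what that pipeline distinction looks like in code (hypothetical names, assuming a simple per-user opt-out set rather than anything Granola has documented), an opt-out gate only excludes users who found and flipped the setting; the common case, an empty opt-out set, sends everything through.

```python
from dataclasses import dataclass

@dataclass
class Note:
    user_id: str
    text: str

def training_corpus(notes: list[Note], opted_out: set[str]) -> list[str]:
    """Collect note text for model training, honoring explicit opt-outs.

    Because inclusion is the default, a user who never touches the
    setting contributes every note; only opted_out members are skipped.
    """
    return [n.text for n in notes if n.user_id not in opted_out]

# The common case: nobody has changed the setting yet.
corpus = training_corpus(
    [Note("alice", "Q3 roadmap discussion"), Note("bob", "vendor pricing")],
    opted_out=set(),
)
# -> both notes enter the training corpus
```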

The friction here is the story. Opt-out systems are not neutral just because they are documented. They rely on users noticing the setting, understanding what it does, and changing it. In practice, many people will not revisit defaults after onboarding, which means the starting configuration becomes the effective policy. That is why privacy language in the UI can be misleading when the default behavior still broadens access and reuse.
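One way to see why the starting configuration becomes the effective policy: in a hypothetical settings object (again illustrative, not Granola's actual schema), the zero-argument constructor is what the vast majority of accounts run under indefinitely.

```python
from dataclasses import dataclass

@dataclass
class UserSettings:
    # Both privacy-relevant flags start in their permissive position.
    link_sharing_enabled: bool = True
    include_notes_in_training: bool = True

# A user who never revisits settings after onboarding keeps this
# object unchanged, so these defaults are the de facto policy.
effective_policy = UserSettings()
```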

For AI note-taking products, this is the real technical tension: the features that make the app useful often depend on broader data access, persistent retention, and reuse across systems. Collaboration requires sharing. Model improvement benefits from more labeled, real-world content. But those same design choices also expand the trust surface. They determine whether the product behaves like a personal assistant, a shared workspace, or a data source feeding an improvement loop.

That distinction is especially important for enterprise buyers. A tool whose notes are link-viewable by default and whose content may enter training unless opted out is not just another lightweight productivity app in procurement terms. It triggers questions about access control, retention, downstream use, and whether employees can realistically control where sensitive meeting content goes. Security and compliance teams will read it as a deployment that requires risk controls, not a casual default.

None of this means every AI note app works this way, or that the feature set is inherently inappropriate. It does mean that the trust profile of these products is being shaped less by policy pages than by operational choices embedded in the UX. Granola’s defaults show how easily a "private" positioning can be undercut by settings that favor shareability and model improvement over least-privilege design.

For technical readers, that is the point: this is not just bad UX. It is a product-design decision that changes the app’s governance posture before a user ever reads the terms.