Let’s start with a small but telling problem.
If you try to type “fhe” into Google Docs, there’s a decent chance it quietly “fixes” it to “the.”
Which is… kind of perfect.
Because for most people, FHE might as well not exist yet.
It’s either invisible, misunderstood, or autocorrected out of the conversation entirely.
This post is the beginning of a short series to change that.
No heavy math. No academic deep dives. Just a practical look at what this is, why it matters, and where it might actually go.
The Assumption We Don’t Question
There’s a quiet assumption baked into nearly every system we use:
If you want a computer to do something useful with your data, it has to be able to see it.
We don’t really question this anymore. It’s just how things work.
Your data is:
- encrypted on your device
- encrypted while it travels
- encrypted when it’s stored
All good things.
But the moment you want something done with that data?
It gets decrypted.
The Part No One Talks About
Here’s the version of modern computing that doesn’t usually make the marketing page:
1. Your data is encrypted
2. It moves securely to a server
3. It gets decrypted
4. Compute happens
5. It gets encrypted again
Step 3 is doing a lot of work here.
It’s also where:
- data is actually visible
- systems have to be trusted
- things can go sideways
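If you want to see that flow in code, here’s a minimal sketch of the conventional pattern in Python, using the widely used cryptography package as a stand-in for whatever a real service would run. The names and values are just illustrative; the line that matters is the decrypt, where the service ends up holding both the key and the plaintext.

```python
# A minimal sketch of today's decrypt-to-compute pattern, assuming the
# "cryptography" package is installed. Names and values are illustrative.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, a key the service also has access to
f = Fernet(key)

# Client side: encrypt before the data leaves your hands
payload = json.dumps({"purchases": [19.99, 42.50, 7.25]}).encode()
ciphertext = f.encrypt(payload)            # protected in transit and at rest

# Server side: step 3 -- the data has to become plaintext to be useful
plaintext = f.decrypt(ciphertext)          # <-- the exposure moment
purchases = json.loads(plaintext)["purchases"]
total = sum(purchases)                     # compute happens on readable data

# Then it gets locked up again
encrypted_result = f.encrypt(json.dumps({"total": total}).encode())
```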
We’ve built an entire industry around phrases like:
- “encrypted at rest”
- “encrypted in transit”
But there’s a missing third leg of the stool:
What about when the data is being used?
Because that’s the moment it’s most exposed.
Why Do We Decrypt It in the First Place?
At some point, someone effectively made this call:
“If we want to compute on data… we should probably be able to read it.”
And honestly? They weren’t wrong.
Early computing wasn’t exactly overflowing with options. There was no secret menu where you could check the box for “do math on encrypted data.” That didn’t exist yet.
So the model became:
- Lock the data when it’s moving around
- Lock it when it’s sitting still
- Unlock it when you actually need to use it
It’s less of a grand design decision, and more of a very practical one.
You could say the industry hit the “easy button.”
But it wasn’t laziness, it was reality.
It’s also worth remembering that FHE isn’t the first encryption technology to start out looking impractical.
Encryption in transit, the thing we now take for granted every time we buy something online, used to be expensive, slow, and operationally painful too.
There was a time when protecting data moving across the internet felt impractical at scale.
Then hardware improved. Systems adapted. Expectations changed.
And suddenly, encrypted traffic became normal.
That shift is what made modern e-commerce possible.
Your credit card moving safely across the internet stopped being a luxury feature and became table stakes.
We also didn’t fully understand what was coming.
- How much data we’d create.
- How valuable that data would become.
- How much of modern business would depend on moving sensitive information between systems we didn’t fully control.
Looking back, encrypting data in transit feels obvious.
At the time, it didn’t.
FHE may be sitting in a similar moment now.
Not yet default. Not yet easy. But increasingly difficult to ignore.
Computers operate on data they can understand.
Encrypted data, by design, looks like nonsense.
So if you want to run analytics, render a dashboard, train a model, or even just calculate a total, you have to turn that “nonsense” back into something meaningful.
Which means decrypting it.
If there had been a “compute on encrypted data” checkbox in 1998, someone would’ve clicked it.
It just wasn’t there.
The Trade We Quietly Accepted
Over time, that pattern hardened into architecture.
And we all, more or less, agreed to the trade:
“We’ll protect data everywhere…except for the exact moment we need it most.”
Not because it was ideal.
Because it was the only thing that worked.
Trust, But… You Know
To make today’s systems work, you’re implicitly trusting:
- the company running the service
- their employees
- their infrastructure
- their security practices
- their “we take your privacy seriously” page
And to be fair, most of the time, that works.
Until it doesn’t.
Data breaches, insider access, misconfigurations…
they all tend to trace back to one simple reality:
At some point, the data had to be visible to be useful.
What If That Step Just… Went Away?
Now imagine a slightly different model.
- You send your data to a system.
- It stays encrypted the entire time.
- Compute happens anyway.
- You get a result back, still encrypted.
- Only you can unlock it.
That’s Fully Homomorphic Encryption (FHE).
It allows computation on encrypted data, without ever decrypting it.
No peek behind the curtain.
No “temporary exposure.”
No trust required at that exposure step… because the step doesn’t exist.
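Here’s roughly what that looks like with an actual FHE library. This is a minimal sketch using the open-source TenSEAL library and its CKKS scheme; it’s just one of several FHE libraries, the parameters and values are illustrative, and it isn’t a production setup. The point is that the “server-side” math runs entirely on ciphertexts.

```python
# A minimal FHE sketch using TenSEAL's CKKS scheme (pip install tenseal).
# Parameters and values are illustrative, not a production configuration.
import tenseal as ts

# Client side: create keys and an encryption context
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40

# Encrypt before anything leaves your hands
jan_spend = ts.ckks_vector(context, [19.99, 42.50, 7.25])   # per-category totals
feb_spend = ts.ckks_vector(context, [12.00, 30.25, 9.99])

# "Server" side: math happens directly on the ciphertexts -- no decryption.
# (In a real deployment the server would get a copy of the context without
# the secret key, so it couldn't decrypt even if it wanted to.)
running_total = jan_spend + feb_spend        # still encrypted

# Back on the client: only the secret key holder can open the result
print(running_total.decrypt())               # ~[31.99, 72.75, 17.24]
```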
A Less Technical Way to Think About It
Think of your data as something locked inside a box.
Traditional computing:
- You hand over the box
- Someone opens it
- Does the work
- Closes it back up
With FHE:
- You hand over the locked box
- They never open it
- And somehow… they still get the job done
It sounds a little absurd.
That’s because, for a long time, it basically was.
Why This Hasn’t Taken Over (Yet)
If this is so useful, why isn’t everything built this way already?
Two honest answers:
1. It’s expensive (computationally)
Doing math on encrypted data is hard.
Historically, it’s been very slow compared to normal computation.
2. It’s not simple to implement
There are tradeoffs:
- accuracy vs performance
- different encryption schemes
- operational complexity
This hasn’t been a “drop-in library and call it a day” situation.
Why It’s Entering the Conversation
For a long time, FHE lived in the “really interesting… but not practical” category.
Academically impressive. Operationally unrealistic.
That’s starting to change.
Not all at once. Not everywhere. But enough that it’s showing up in real conversations.
A few things are converging:
The math got better
Newer schemes—like CKKS—made it possible to work with real numbers and approximate computations in ways that actually map to real workloads.
Not perfect.
But usable.
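A quick way to see what “approximate” means in practice: with CKKS you operate on vectors of real numbers, and the decrypted answer comes back very close to, but not bit-for-bit identical with, the exact result. A small sketch, again using TenSEAL with illustrative parameters:

```python
# What "approximate" means: CKKS computes on real numbers, and decrypted
# results land extremely close to the exact answer without matching it bit
# for bit. Again TenSEAL, again illustrative parameters.
import tenseal as ts

context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40

prices = ts.ckks_vector(context, [19.99, 42.50, 7.25])

with_tax = prices * 1.07   # multiply by a plaintext constant, still encrypted

print(with_tax.decrypt())
# Expect values hovering right around [21.3893, 45.475, 7.7575] --
# close enough for analytics, not bit-exact like integer arithmetic.
```

For most analytics and ML-style workloads, that tiny numerical error is usually an acceptable trade for being able to compute on encrypted data at all.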
The hardware is finally catching up
For years, running FHE on CPUs was like bringing a bicycle to a freeway.
Now you’ve also got GPUs and FPGAs, which allow the kind of parallel compute that makes previously impractical workloads… less impractical.
And then there’s a newer category starting to emerge: Custom-built silicon.
At Niobium, we’re building a purpose-built accelerator, an ASIC designed specifically for this kind of workload.
But hardware is only part of the story.
We’re also building a cloud platform we call Niobium Fog™, designed to let customers and partners run encrypted workloads and applications using that same purpose-built silicon.
Because building faster hardware is useful.
Making it accessible is what actually changes adoption.
We’ll save that deeper story for another post, but it’s a signal of something important: when people start building custom hardware for a problem, we’re no longer asking if it matters.
We’re betting that it does.
The tooling is also emerging
FHE used to require teams to roll their own cryptography, which is a polite way of saying: not worth it.
Now there are actual libraries, frameworks, and teams, including Niobium, thinking about:
- developer experience
- abstractions
- integration into real systems
It’s still early. But it’s no longer empty.
And the pressure is real:
- AI models need data.
- Regulations restrict data.
- Companies don’t want to share data.
- Users trust providers less than ever.
We’ve created a world where:
- data is valuable
- data is sensitive
- and data is increasingly hard to use safely
FHE sits right in the middle of that tension.
The Inflection Point
FHE didn’t suddenly become easy.
But it crossed an important line:
From “interesting theory” to “maybe we can actually use this in a few places.”
And once something crosses that line, people start paying attention.
Why This Actually Matters
This isn’t just about making encryption stronger.
It’s about removing a constraint that’s been there from the beginning: that data has to be exposed to be useful.
If that constraint disappears—even partially—you start to unlock things like:
- analyzing sensitive financial data without exposing it
- training models across datasets that can’t be shared
- using cloud infrastructure without giving it visibility into your data
- collaborating across organizations without handing anything over
That’s not just better security.
That’s new capability.
Trust Is the Real Conversation
One of the things this post keeps circling back to—without fully saying it yet—is trust.
Not the warm, fuzzy kind. The cryptographic kind.
Because once you start talking about encrypted computation, the obvious next question becomes:
Who exactly are we trusting here?
- Your cloud provider?
- Your infrastructure team?
- The third-party analytics platform?
- That vendor who swears they’re “security-first” because they added two-factor auth and a dark mode?
This is where things get fun.
There’s an entire category of assumptions around parties being what cryptographers politely call “honest-but-curious”—also known as semi-honest.
Which sounds far nicer than it really is.
Translation:
- They’ll follow the rules.
- They’ll run the computation.
- They won’t technically steal your data.
…but they are absolutely peeking over your shoulder the entire time.
Think:
- “I would never open your mail…
- …but if the envelope happened to be transparent, I’m not not looking.”
A surprising amount of modern security architecture lives right here.
Not defending against movie-villain hackers in hoodies.
Defending against normal, rational systems where everyone behaves correctly and still wants to know more than they should.
And honestly, that’s the quiet assumption behind most of modern cloud computing.
- We trust systems to behave.
- We trust providers to operate correctly.
- But we still worry about visibility.
That question, who can see what, when, and under what assumptions, is where Fully Homomorphic Encryption stops being a math paper and starts becoming a product conversation.
Because FHE isn’t just about encrypting data.
It’s about changing the trust model entirely.
Instead of saying: “We trust you not to look.”
…it lets us say: “You don’t get the ability to look in the first place.”
That’s a very different architecture.
And honestly, that’s where things get really interesting.
We’ll get into that next.
What This Really Means
If you strip everything else away:
FHE lets you compute on data without ever decrypting it.
That’s the idea.
It’s simple to say. Much harder to do.
What’s Next
This is the first post in a short FHE 101 series.
Next up:
We’re going to dig into one of the most important, and most overlooked, parts of privacy-preserving systems:
Who exactly are we trusting?
Because once you move past the magic trick of “compute on encrypted data,” the real conversation starts.
Not with math. With assumptions.
- Who runs the infrastructure?
- Who holds the keys?
- Who can see metadata?
- Who is considered “honest-but-curious”?
And what happens when “trust us” stops being a sufficient security model?
This is where terms like semi-honest adversaries, secure multi-party computation, encrypted search, and privacy-preserving collaboration start to matter.
It sounds academic, but it very quickly becomes product design.
Because security claims without a threat model are mostly just marketing.
And the difference between:
“We promise not to look”
and
“We technically can’t look”
…is where Niobium is headed.
We’ll talk more about that in the next post in this series.
In the meantime, autocorrect will probably still try to turn “fhe” into “the.”
…
If you’re the kind of person who skips ahead in the textbook, jumps to level 7 before finishing level 2, or immediately asks for the boss fight before the tutorial…
…and you’d like a practical primer on actually building an FHE application, we’ve got you covered.
Check out another one of our blog posts, Building Your First FHE Application.
Think of it as the “some assembly required” version of this conversation.