Protecting Data and Models: Citadel's Secure Collaborative Learning
Tue Nov 26 2024
In the world of machine learning, not everyone has the expertise to train models. That's why data owners often team up with model owners to create better, more generalized models. The problem? Privacy and confidentiality. Data owners want their data to stay secret, while model owners don't want their methods and intellectual property exposed. Existing solutions like federated learning and split learning can't protect both sides at once.
Enter Citadel, a system that uses Intel SGX to keep both data and models safe in untrusted environments. It sets up a strong barrier between training enclaves for data owners and an aggregator enclave for the model owner. This prevents any data or model leakage during training. Citadel also offers better scalability and stronger privacy guarantees compared to other SGX-protected systems.
But how does it work? Citadel performs distributed training across multiple enclaves. It uses zero-sum masking and hierarchical aggregation to maintain this strong information barrier, so that individual updates stay hidden during aggregation and neither raw data nor model parameters leak outside their enclaves.
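The zero-sum masking idea can be illustrated with a small sketch. The helper names below (`zero_sum_masks`, `aggregate`) are hypothetical, not Citadel's actual API: each training enclave adds a random mask to its gradient before releasing it, the masks are constructed to sum to zero, and so they cancel when the aggregator sums the masked updates.

```python
import random

def zero_sum_masks(n, dim, seed=0):
    """Generate n mask vectors that sum (element-wise) to zero.

    Hypothetical helper: the first n-1 masks are random, and the
    last is chosen as the negative of their element-wise sum.
    """
    rng = random.Random(seed)
    masks = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n - 1)]
    masks.append([-sum(col) for col in zip(*masks)])
    return masks

def aggregate(updates):
    """Element-wise sum of the masked updates, as an aggregator might do."""
    return [sum(col) for col in zip(*updates)]

# Three training enclaves, each holding a private 2-dimensional gradient.
grads = [[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4]]
masks = zero_sum_masks(len(grads), dim=2)

# Each enclave releases only its masked gradient.
masked = [[g + m for g, m in zip(gv, mv)] for gv, mv in zip(grads, masks)]

# The masks cancel in the sum, so the aggregate equals the sum of
# the raw gradients, while no single masked update reveals its gradient.
total = aggregate(masked)
```

Here `total` equals the element-wise sum of the raw gradients, even though any individual masked update looks random on its own. (Citadel's actual design combines this with hierarchical aggregation across enclaves; this sketch only shows the cancellation property.)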
Tests show that Citadel can handle a large number of enclaves with only a small performance hit. This is great news for anyone looking to collaborate on machine learning projects without compromising privacy.
https://localnews.ai/article/protecting-data-and-models-citadels-secure-collaborative-learning-4337aaf4